Can I ignore these query exceptions in my ElasticSearch log?

I have a large number of indices in my ES instance, and I have noticed that the log files are growing rather large. The ElasticSearch Chef cookbook by default sets the log level to DEBUG and this has resulted in millions of error messages being written into the log. Please see this one as an example:
[2015-02-20 18:42:28,858][DEBUG][action.search.type ] [SEARCHNODE] [child_index][4], node[xxxx], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest#1a9a62ad] lastShard [true]
org.elasticsearch.search.SearchParseException: [ichild_index][4]: from[0],size[105]: Parse Failure [Failed to parse source [{"from":0,"size":105,"sort":{"lastmodified":{"order":"desc","missing":"_last"}},"query":{"indices":{"indices":["main_index"],"query":{"filtered":{"query":{"bool":{"must":[{"match_all":{}}]}},"filter":{"and":{"filters":[{"term":{"isclosed":false}},{"or":[{"and":[{"type":{"value":"type_name"}}]}]},{"term":{"planid":1454}},{"bool":{"should":[{"terms":{"roles":[173,935,934,937,930,938,936]}},{"missing":{"field":"roles"}}]}}]}}}},"no_match_query":"none"}},"fields":"[]"}]]
at org.elasticsearch.search.SearchService.parseSource(SearchService.java:660)
at org.elasticsearch.search.SearchService.createContext(SearchService.java:516)
at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:488)
at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:257)
at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:206)
at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:203)
at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:517)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.search.SearchParseException: [child_index][4]: from[0],size[105]: Parse Failure [No mapping found for [lastmodified] in order to sort on]
at org.elasticsearch.search.sort.SortParseElement.addSortField(SortParseElement.java:198)
at org.elasticsearch.search.sort.SortParseElement.addCompoundSortField(SortParseElement.java:172)
at org.elasticsearch.search.sort.SortParseElement.parse(SortParseElement.java:90)
at org.elasticsearch.search.SearchService.parseSource(SearchService.java:644)
The query in the error message contains this fragment:
... {"indices":{"indices":["main_index"] ...
However, the error actually originates from child_index. I'm not sure why my instance would even consider executing the query against child_index, since the query clearly isn't meant to touch that index.
The query above actually executes successfully: results are returned correctly, and the web application doesn't log anything that indicates a problem. Presumably the query is at some point run against main_index as well, and those results are correctly returned to the web app.
My instance is under a moderate workload, and this file can comfortably grow to 5 GB in a given 12-hour period. I know the simple solution to that problem: set the log level to WARN and the errors will no longer be written. However, I'm worried that we might have a hitherto undiagnosed problem with the instance that could bite us later.

Of all the errors to ignore, org.elasticsearch.search.SearchParseException is probably the one you should never ignore. It means that ES was unable to parse your search JSON the way it expects to (as far as I can tell).
I took a look at your JSON, and although it lints, your "fields" value is actually the string "fields": "[]", which could be what's causing the issue. Can you try it without the quotes and see what happens?
This is only a theory, but it's possible ES fails to parse that section and simply ignores it (which, in this case, should produce the same result as if it had been parsed).
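For illustration, here is a minimal sketch of that change using the Python client (the client call and the simplified query are assumptions; only the "fields" value matters here):
from elasticsearch import Elasticsearch

es = Elasticsearch()

body = {
    "from": 0,
    "size": 105,
    "sort": {"lastmodified": {"order": "desc", "missing": "_last"}},
    "query": {"match_all": {}},  # stand-in for the original indices/filtered query
    "fields": [],                # a real JSON array, not the string "[]"
}

response = es.search(index="main_index", body=body)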

Related

Elasticsearch sometimes failing to execute fetch phase

Currently, all search queries that are passed into ES are truncated to 1000 characters. Regardless, every so often we get this error:
[2019-10-08T15:44:08,126][DEBUG][o.e.a.s.TransportSearchAction] [zir05M0] [135539946] Failed to execute fetch phase
org.elasticsearch.transport.RemoteTransportException: [zir05M0][__IP__][__PATH__[__PATH__]]
Caused by: java.lang.IllegalArgumentException: This builder doesn't allow terms that are larger than 1,000 characters, got <some really long string of text>
I'm a novice when it comes to Elasticsearch and really don't know where to begin in trying to debug this. My only guess is that a more_like_this query is using a field that's too long, but I can't confirm that. Any hints that could get me going down the right track would be greatly appreciated.

ElasticSearch randomly fails when running tests

I have a test ElasticSearch box (2.3.0), and my tests that use ES fail in random order, which is really frustrating (they fail with an "All shards failed" exception).
Looking at the elastic_search.log file, it only showed me this:
[2017-05-04 04:19:15,990][DEBUG][action.search.type ] [es-testing-1] All shards failed for phase: [query]
RemoteTransportException[[es-testing-1][127.0.0.1:9300][indices:data/read/search[phase/query]]]; nested: IllegalIndexShardStateException[CurrentState[RECOVERING] operations only allowed when shard state is one of [POST_RECOVERY, STARTED, RELOCATED]];
Caused by: [derp_test][[derp_test][3]] IllegalIndexShardStateException[CurrentState[RECOVERING] operations only allowed when shard state is one of [POST_RECOVERY, STARTED, RELOCATED]]
at org.elasticsearch.index.shard.IndexShard.readAllowed(IndexShard.java:993)
at org.elasticsearch.index.shard.IndexShard.acquireSearcher(IndexShard.java:814)
at org.elasticsearch.search.SearchService.createContext(SearchService.java:641)
at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:618)
at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:369)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:368)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:365)
at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
Any idea what's going on? So far my research only suggests this is most likely due to a corrupt translog, but I don't think deleting the translog will help, because the test drops the test index for every namespace.
The ES test box has 3.5 GB of RAM and uses a 2.5 GB heap; CPU usage is quite normal during the test (it peaked at 15%).
To clarify: when I said a failing test, I meant an error with the weird exception mentioned above (not a test failing due to an incorrect value). I do a manual refresh after every insert/update operation, so the values are correct.
After investigating the ElasticSearch log file (at DEBUG level) and the source code, it turns out that after an index is created, its shards enter the RECOVERING state, and sometimes my test tried to query ElasticSearch while the shards were not yet active, hence the exception.
The fix is simple: after creating an index, wait until its shards are active using the setWaitForActiveShards method; to be extra paranoid I also added setWaitForYellowStatus.
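setWaitForActiveShards and setWaitForYellowStatus are Java client calls; a rough equivalent with the Python client (using the index name from the log above) is to block on cluster health right after creating the index:
from elasticsearch import Elasticsearch

es = Elasticsearch()
es.indices.create(index="derp_test")

# Block until the new index's shards are allocated (at least yellow status)
es.cluster.health(index="derp_test", wait_for_status="yellow", timeout="30s")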
It's recommended to use ESIntegTestCase for integration tests.
ESIntegTestCase has helper methods such as ensureGreen and refresh to ensure Elasticsearch is ready before the test continues, and you can configure node settings for the test.
If you use Elasticsearch directly as a test box, it may cause various problems:
Like your exception: it seems the shards of index derp_test are still recovering.
Even if you have already indexed your data, an immediate search can fail, because the cluster still needs to flush or refresh.
...
Most of those problems can be "fixed" by using Thread.sleep to wait a while :), but that's a bad way to do it.
Try manually refreshing your indices after inserting the data and before performing a query to ensure the data is searchable.
Either:
As part of the index request - https://www.elastic.co/guide/en/elasticsearch/reference/2.3/docs-index_.html#index-refresh
Or separately - https://www.elastic.co/guide/en/elasticsearch/reference/2.3/indices-refresh.html
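A minimal sketch of both options with the Python client (the index name, doc type, and document below are hypothetical):
from elasticsearch import Elasticsearch

es = Elasticsearch()

# Option 1: refresh as part of the index request
es.index(index="derp_test", doc_type="doc", id=1,
         body={"field": "value"}, refresh=True)

# Option 2: refresh the index explicitly before querying
es.indices.refresh(index="derp_test")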
There could be another reason. I had the same problem with my Elasticsearch unit tests. At first I thought the root cause was somewhere in .NET Core or NEST or elsewhere outside my code, because the tests would run successfully in Debug mode (when debugging tests) but randomly fail in Release mode (when running them).
After a lot of investigation and trial and error, I found that the root cause (in my case) was concurrency; in other words, a race condition.
I used to recreate and seed my index (initializing and preparing it) in the test class constructor, which means it executed at the beginning of every test. Since the tests run concurrently, a race condition was likely to occur and make my tests fail.
Here is the initialization code that caused the tests to fail randomly when running them (in Release mode):
public BaseElasticDataTest(RootFixture fixture)
    : base(fixture)
{
    ElasticHelper = fixture.Builder.Build<ElasticDataProvider<FakePersonIndex>>();
    deleteFakeIndex();
    createFakeIndex();
    fillFakeIndexData();
}
The code above used to run for every test, concurrently. I fixed the problem by executing the initialization code only once per test class (once for all the test cases inside it), and the failures went away.
Here is my fixed test class constructor code:
static bool initialized = false;

public BaseElasticDataTest(RootFixture fixture)
    : base(fixture)
{
    ElasticHelper = fixture.Builder.Build<ElasticDataProvider<FakePersonIndex>>();
    if (!initialized)
    {
        deleteFakeIndex();
        createFakeIndex();
        fillFakeIndexData();
        // for concurrency
        System.Threading.Thread.Sleep(100);
        initialized = true;
    }
}
Hope it helps

Spark Streaming and ElasticSearch - Could not write all entries

I'm currently writing a Scala application made up of a producer and a consumer. The producer gets some data from an external source and writes it to Kafka. The consumer reads from Kafka and writes to Elasticsearch.
The consumer is based on Spark Streaming, and every 5 seconds it fetches new messages from Kafka and writes them to ElasticSearch. The problem is that I'm not able to write to ES, because I get a lot of errors like the one below:
[ERROR] [2015-04-24 11:21:14,734] [org.apache.spark.TaskContextImpl]:
Error in TaskCompletionListener
org.elasticsearch.hadoop.EsHadoopException: Could not write all
entries [3/26560] (maybe ES was overloaded?). Bailing out... at
org.elasticsearch.hadoop.rest.RestRepository.flush(RestRepository.java:225)
~[elasticsearch-spark_2.10-2.1.0.Beta3.jar:2.1.0.Beta3] at
org.elasticsearch.hadoop.rest.RestRepository.close(RestRepository.java:236)
~[elasticsearch-spark_2.10-2.1.0.Beta3.jar:2.1.0.Beta3] at
org.elasticsearch.hadoop.rest.RestService$PartitionWriter.close(RestService.java:125)
~[elasticsearch-spark_2.10-2.1.0.Beta3.jar:2.1.0.Beta3] at
org.elasticsearch.spark.rdd.EsRDDWriter$$anonfun$write$1.apply$mcV$sp(EsRDDWriter.scala:33)
~[elasticsearch-spark_2.10-2.1.0.Beta3.jar:2.1.0.Beta3] at
org.apache.spark.TaskContextImpl$$anon$2.onTaskCompletion(TaskContextImpl.scala:57)
~[spark-core_2.10-1.2.1.jar:1.2.1] at
org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:68)
[spark-core_2.10-1.2.1.jar:1.2.1] at
org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:66)
[spark-core_2.10-1.2.1.jar:1.2.1] at
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
[na:na] at
scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
[na:na] at
org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:66)
[spark-core_2.10-1.2.1.jar:1.2.1] at
org.apache.spark.scheduler.Task.run(Task.scala:58)
[spark-core_2.10-1.2.1.jar:1.2.1] at
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:200)
[spark-core_2.10-1.2.1.jar:1.2.1] at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
[na:1.7.0_65] at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
[na:1.7.0_65] at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65]
Consider that the producer writes 6 messages every 15 seconds, so I really don't understand how this "overload" can possibly happen (I even cleaned the topic and flushed all old messages; I thought it was related to an offset issue). The task executed by Spark Streaming every 5 seconds can be summarized by the following code:
val result = KafkaUtils.createStream[String, Array[Byte], StringDecoder, DefaultDecoder](ssc, kafkaParams, Map("wasp.raw" -> 1), StorageLevel.MEMORY_ONLY_SER_2)
val convertedResult = result.map(k => (k._1 ,AvroToJsonUtil.avroToJson(k._2)))
//TO-DO : Remove resource (yahoo/yahoo) hardcoded parameter
log.info(s"*** EXECUTING SPARK STREAMING TASK + ${java.lang.System.currentTimeMillis()}***")
convertedResult.foreachRDD(rdd => {
  rdd.map(data => data._2).saveToEs("yahoo/yahoo", Map("es.input.json" -> "true"))
})
If I try to print the messages instead of sending to ES, everything is fine and I actually see only 6 messages. Why can't I write to ES?
For the sake of completeness, I'm using this library to write to ES : elasticsearch-spark_2.10 with the latest beta version.
I found, after many retries, a way to write to ElasticSearch without getting any errors. Basically, passing the parameter "es.batch.size.entries" -> "1" to the saveToEs method solved the problem. I don't understand why the default or any other batch size leads to the aforementioned error, considering that I would expect an error message if I were trying to write more than the allowed maximum batch size, not less.
Moreover, I noticed that I actually was writing to ES, just not all of my messages: I was losing between 1 and 3 messages per batch.
When I pushed a dataframe to ES from Spark, I had the same error message. Even with the "es.batch.size.entries" -> "1" configuration, I had the same error.
Once I increased the thread pool in ES, the issue went away.
For example:
Bulk pool
threadpool.bulk.type: fixed
threadpool.bulk.size: 600
threadpool.bulk.queue_size: 30000
As was already mentioned here, this is a document write conflict.
Your convertedResult data stream contains multiple records with the same id. When they are written to Elasticsearch as part of the same batch, this produces the error above.
Possible solutions:
Generate a unique id for each record. Depending on your use case, this can be done in a few different ways. For example, one common solution is to create a new field by combining the id and lastModifiedDate fields and use that field as the id when writing to Elasticsearch.
Perform de-duplication of records based on id: select only one record with a particular id and discard the other duplicates. Depending on your use case, this could be the most recent record (based on a timestamp field), the most complete one (most of the fields contain data), etc.
Solution #1 will store every record that you receive in the stream.
Solution #2 will store only one record per id, chosen by your de-duplication logic. The result would be the same as setting "es.batch.size.entries" -> "1", except you will not limit performance by writing one record at a time.
Another possibility is the cluster/shard status being RED. Address this issue, which may be due to unassigned replicas. Once the status turned GREEN, the API call succeeded just fine.
This is a document write conflict.
For example:
Multiple documents specify the same _id for Elasticsearch to use.
These documents are located in different partitions.
Spark writes multiple partitions to ES simultaneously.
The result is Elasticsearch receiving multiple updates for a single document at once: from multiple sources / through multiple nodes / containing different data.
This matches your observation, "I was losing between 1 and 3 messages per batch":
A fluctuating number of failures when the batch size is > 1
Success when the batch write size is "1"
Just adding another potential reason for this error, hopefully it helps someone.
If your Elasticsearch index has child documents then:
If you are using a custom routing field (not _id), then according to the documentation the uniqueness of the documents is not guaranteed. This might cause issues while updating from Spark.
If you are using the standard _id, uniqueness will be preserved; however, you need to make sure the following options are provided while writing from Spark to Elasticsearch:
es.mapping.join
es.mapping.routing

How to Fix Read timed out in Elasticsearch

I used Elasticsearch-1.1.0 to index tweets.
The indexing process is okay.
Then I upgraded the version. Now I use Elasticsearch-1.3.2, and I get this message randomly:
Exception happened: Error raised when there was an exception while talking to ES.
ConnectionError(HTTPConnectionPool(host='127.0.0.1', port=8001): Read timed out. (read timeout=10)) caused by: ReadTimeoutError(HTTPConnectionPool(host='127.0.0.1', port=8001): Read timed out. (read timeout=10)).
Snapshot of the randomness:
Happened --33s-- Happened --27s-- Happened --22s-- Happened --10s-- Happened --39s-- Happened --25s-- Happened --36s-- Happened --38s-- Happened --19s-- Happened --09s-- Happened --33s-- Happened --16s-- Happened
--XXs-- = after XX seconds
Can someone point out on how to fix the Read timed out problem?
Thank you very much.
It's hard to give a direct answer, since the error you're seeing might be associated with the client you are using. However, a solution might be one of the following:
1. Increase the default timeout globally when you create the ES client by passing the timeout parameter. Example in Python:
es = Elasticsearch(timeout=30)
2. Set the timeout per request made by the client. Taken from the Elasticsearch Python docs below:
# only wait for 1 second, regardless of the client's default
es.cluster.health(wait_for_status='yellow', request_timeout=1)
The above will give the cluster some extra time to respond
Try this:
es = Elasticsearch(timeout=30, max_retries=10, retry_on_timeout=True)
It might not fully avoid ReadTimeoutError, but it minimizes them.
Read timeouts can also happen when the query size is large. For example, in my case of a pretty large ES index (> 3M documents), a search for a query with 30 words took around 2 seconds, while a search for a query with 400 words took over 18 seconds. So for a sufficiently large query, even timeout=30 won't save you. An easy solution is to crop the query to a size that can be answered below the timeout.
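As a rough illustration of that cropping approach (the index name, field name, and 30-word cutoff below are assumptions; pick whatever your cluster can answer within the timeout):
from elasticsearch import Elasticsearch

es = Elasticsearch(timeout=30)

def crop_query(query_text, max_terms=30):
    # Keep only the first max_terms whitespace-separated words
    return " ".join(query_text.split()[:max_terms])

long_query = "a very long user-supplied query containing hundreds of words ..."
es.search(index="tweets", body={"query": {"match": {"text": crop_query(long_query)}}})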
For what it's worth, I found that this seems to be related to a broken index state.
It's very difficult to reliably recreate this issue, but I've seen it several times; operations run as normal except certain ones which periodically seem to hang ES (specifically refreshing an index it seems).
Deleting an index (curl -XDELETE http://localhost:9200/foo) and reindexing from scratch fixed this for me.
I recommend periodically clearing and reindexing if you see this behaviour.
Increasing various timeout options may immediately resolve issues, but does not address the root cause.
Provided the ElasticSearch service is available and the indexes are healthy, try increasing the Java minimum and maximum heap sizes: see https://www.elastic.co/guide/en/elasticsearch/reference/current/jvm-options.html .
TL;DR: edit /etc/elasticsearch/jvm.options and set -Xms1g and -Xmx1g.
You should also check whether everything is fine with Elasticsearch itself. Some shards can be unavailable; here is a nice doc about possible reasons for unassigned shards: https://www.datadoghq.com/blog/elasticsearch-unassigned-shards/

elasticsearch highlighting error, failed to highlight ... String index out of range

I cannot make head or tail of this error, and it's happening so randomly that I don't even know where to start looking.
This is what the full error looks like:
Tire::Search::SearchRequestFailed: 500 :
{
"error": "SearchPhaseExecutionException[Failed to execute phase [query_fetch], total failure;
shardFailures {[7McitJnjQkqLkViqUpZUyw][content][4]:
FetchPhaseExecutionException[[content][4]:
query[+_all:account +_all:set +_all:up],from[0],size[20]:
Fetch Failed [Failed to highlight field [post_content]]];
nested: StringIndexOutOfBoundsException[String index out of range: -5]; }]",
"status": 500
}
A query like
"relationship learning"
will run fine, but running
"relationship centered learning"
will throw the error. In fact, any of the letters c, d, j, q, x, or z used with "relationship learning" (e.g. "d relationship learning") will throw the error.
It's truly maddening.
I'm running elasticsearch 19.2 with Tire
I just want to know where to start looking; any ideas will help.
This is a more complete explanation of the problem I'm having; it's exactly the same.
As @imotov said above, this is a bug in Lucene, and therefore in Elasticsearch: https://issues.apache.org/jira/browse/LUCENE-4899
You can work around it by not using the fast vector highlighter, or by setting fragment_size to a higher number to reduce how often the bug appears.
I doubt the errors will go away completely unless you set fragment_size to an impossibly high number, which you could do in theory, but then you'd have to handle truncation on your own, which kind of defeats the purpose of the highlighter in the first place.
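To illustrate those two knobs, here is a rough sketch with the Python client (the index and field names are taken from the error above; the explicit per-field "type": "plain" option exists in later Elasticsearch versions, while on older ones the plain highlighter is used whenever term vectors with positions and offsets are not stored):
from elasticsearch import Elasticsearch

es = Elasticsearch()

body = {
    "query": {"match": {"post_content": "relationship centered learning"}},
    "highlight": {
        "fields": {
            "post_content": {
                "type": "plain",       # avoid the fast vector highlighter
                "fragment_size": 500,  # larger fragments make the bug less likely to trigger
            }
        }
    },
}

es.search(index="content", body=body)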
