logstash elasticsearch unassigned shards - elasticsearch

I'm having trouble figuring out why this is happening and how to fix it:
Note that the first logstash index, which has all shards assigned, is only in that state because I manually assigned them.
Everything had been working as expected for several months until today, when I noticed that shards have been sitting unassigned on all of my logstash indexes.
Using Elasticsearch v1.4.0
My Elasticsearch logs appear as follows up until July 29, 2015:
[2015-07-29 00:01:53,352][DEBUG][discovery.zen.publish ] [Thor] received cluster state version 4827
Also, there is a script running on the server which trims the number of logstash indexes down to 30. I think that is where this type of log line comes from:
[2015-07-29 17:43:12,800][DEBUG][gateway.local.state.meta ] [Thor] [logstash-2015.01.25] deleting index that is no longer part of the metadata (indices: [[kibana-int, logstash-2015.07.11, logstash-2015.07.04, logstash-2015.07.03, logstash-2015.07.12, logstash-2015.07.21, logstash-2015.07.17, users_1416418645233, logstash-2015.07.20, logstash-2015.07.25, logstash-2015.07.06, logstash-2015.07.28, logstash-2015.01.24, logstash-2015.07.18, logstash-2015.07.26, logstash-2015.07.08, logstash-2015.07.19, logstash-2015.07.09, logstash-2015.07.22, logstash-2015.07.07, logstash-2015.07.29, logstash-2015.07.10, logstash-2015.07.05, logstash-2015.07.01, logstash-2015.07.16, logstash-2015.07.24, logstash-2015.07.02, logstash-2015.07.27, logstash-2015.07.14, logstash-2015.07.13, logstash-2015.07.23, logstash-2015.06.30, logstash-2015.07.15]])
On July 29, there are a few new entries which I haven't seen before:
[2015-07-29 00:01:38,024][DEBUG][action.bulk ] [Thor] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]
[2015-07-29 17:12:58,658][INFO ][cluster.routing.allocation.decider] [Thor] updating [cluster.routing.allocation.enable] from [ALL] to [NONE]
[2015-07-29 17:12:58,658][INFO ][cluster.routing.allocation.decider] [Thor] updating [cluster.routing.allocation.disable_allocation] from [false] to [true]
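Those last two entries show cluster-wide shard allocation being switched off, which would explain why shards on the newer logstash indexes never get assigned. Assuming the setting was simply never turned back on, re-enabling it through the cluster settings update API should let them allocate again; a minimal sketch for ES 1.x, covering both settings shown in the log, would be:
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient" : {
    "cluster.routing.allocation.enable" : "all",
    "cluster.routing.allocation.disable_allocation" : false
  }
}'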

Related

Having trouble starting elasticsearch-2.0

I have recently run into an issue where I am not able to start Elasticsearch.
Version: 2.0
OS: Linux
The following error message was displayed:
[ERROR][gateway ] [Node] failed to read local state, exiting...
ElasticsearchException[must specify numberOfShards for index [version]]; nested: IllegalArgumentException[must specify numberOfShards for index [version]];
at org.elasticsearch.ExceptionsHelper.maybeThrowRuntimeAndSuppress(ExceptionsHelper.java:163)
at org.elasticsearch.gateway.MetaDataStateFormat.loadLatestState(MetaDataStateFormat.java:309)
at org.elasticsearch.gateway.MetaStateService.loadIndexState(MetaStateService.java:112)
at org.elasticsearch.gateway.MetaStateService.loadFullState(MetaStateService.java:97)
at org.elasticsearch.gateway.GatewayMetaState.loadMetaState(GatewayMetaState.java:97)
at org.elasticsearch.gateway.GatewayMetaState.pre20Upgrade(GatewayMetaState.java:223)
at org.elasticsearch.gateway.GatewayMetaState.<init>(GatewayMetaState.java:85)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.elasticsearch.common.inject.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:56)
How can I get Elasticsearch to start?
Edit 01:
I have put the shard and replica settings in the elasticsearch.yml file but am still not able to start Elasticsearch.
I am still getting the above error message.
Edit 02:
I have added the shard settings to the yml file:
#################################### Index ####################################
# You can set a number of options (such as shard/replica options, mapping
# or analyzer definitions, translog settings, ...) for indices globally,
# in this file.
#
# Note, that it makes more sense to configure index settings specifically for
# a certain index, either when creating it or by using the index templates API.
#
# See <http://elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules.html> and
# <http://elasticsearch.org/guide/en/elasticsearch/reference/current/indices-create-index.html>
# for more information.
# Set the number of shards (splits) of an index (5 by default):
#
index.number_of_shards: 2
# Set the number of replicas (additional copies) of an index (1 by default):
#
index.number_of_replicas: 1
We went ahead and restored the server to a prior date when Elasticsearch was working, and the issue was resolved.
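For reference, the comment block in that file suggests setting shard and replica counts per index rather than globally; a rough sketch of that approach at index-creation time (the index name my_index is just an example, not from the original post):
curl -XPUT 'localhost:9200/my_index' -d '{
  "settings" : {
    "number_of_shards" : 2,
    "number_of_replicas" : 1
  }
}'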

Duplicate and missing log entries with FluentBit and ES

We're using FluentBit to ship microservice logs into ES and recently found an issue on one of the environments: some log entries are duplicated (up to several hundred times) while other entries are missing in ES/Kibana but can be found in the microservice's container (kubectl logs my-pod -c my-service).
Each duplicate log entry has a unique _id and _fluentBitTimestamp so it really looks like the problem is on FluentBit's side.
FluentBit version is 1.5.6; the configuration is:
[SERVICE]
    Flush           1
    Daemon          Off
    Log_Level       info
    Log_File        /fluent-bit/log/fluent-bit.log
    Parsers_File    /fluent-bit/etc/parsers.conf
    Parsers_File    /fluent-bit/etc/parsers_java.conf

[INPUT]
    Name            tail
    Path            /home/xng/log/*.log
    Exclude_Path    /home/xng/log/*.zip
    Parser          json
    Buffer_Max_Size 128k

[FILTER]
    Name            record_modifier
    Match           *
    Record          hostname ${HOSTNAME}

[OUTPUT]
    Name            es
    Match           *
    Host            es-logging-service
    Port            9210
    Type            flink-logs
    Logstash_Format On
    Logstash_Prefix test-env-logstash
    Time_Key        _fluentBitTimestamp
Any help would be much appreciated.
We had the same problem.
Can you try adding this to your configuration:
Write_operation upsert
If a log entry has a duplicate _id it will then be updated instead of created.
Please note that Id_Key or Generate_ID is required for the update and upsert scenarios.
https://docs.fluentbit.io/manual/pipeline/outputs/elasticsearch#write_operation
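For context, the adjusted [OUTPUT] section might look roughly like this (a sketch only; it assumes a Fluent Bit release whose es output supports Write_operation, and uses Generate_ID as one way to satisfy the id requirement above):
[OUTPUT]
    Name            es
    Match           *
    Host            es-logging-service
    Port            9210
    Type            flink-logs
    Logstash_Format On
    Logstash_Prefix test-env-logstash
    Time_Key        _fluentBitTimestamp
    Write_operation upsert
    Generate_ID     On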

Elasticsearch - yellow status after opening Kibana

I downloaded Elasticsearch and ran it in the console. ES worked well and had status green. Next I downloaded Kibana and ran it - now the status is still yellow, even after I stop Kibana.
I see this in the logs:
[o.e.c.r.a.AllocationService] [4e84hhA] Cluster health status changed
from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).
How can I fix it, and where can I find more information about this error?
That's probably because the .kibana index has one replica shard and you have a single ES node running.
Run this and you'll get a GREEN status again:
PUT /.kibana/_settings
{
  "index" : {
    "number_of_replicas" : 0
  }
}
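The same change over plain curl would look roughly like this (hedged: on Elasticsearch 6.x and later the Content-Type header is required):
curl -XPUT 'localhost:9200/.kibana/_settings' -H 'Content-Type: application/json' -d '{
  "index" : {
    "number_of_replicas" : 0
  }
}'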

Timeout on deleting a snapshot repository

I'm running elasticsearch 1.7.5 w/ 19 nodes (12 data nodes).
I'm attempting to set up snapshots for backup and recovery, but I am getting a 503 on both creation and deletion of a snapshot repository.
curl -XDELETE 'localhost:9200/_snapshot/backups?pretty'
returns:
{
  "error" : "RemoteTransportException[[masternodename][inet[/10.0.0.20:9300]][cluster:admin/repository/delete]]; nested: ProcessClusterEventTimeoutException[failed to process cluster event (delete_repository [backups]) within 30s]; ",
  "status" : 503
}
I adjusted the request with master_timeout=10m and am still getting a timeout. Is there a way to debug the cause of this request failing?
Performance on this call seems to be related to pending tasks with a higher priority.
https://discuss.elastic.co/t/timeout-on-deleting-a-snapshot-repository/69936/4
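One way to check whether higher-priority tasks are queued ahead of the repository change (a general suggestion, not from the linked thread) is the pending tasks API, which ES 1.7 already supports:
curl -XGET 'localhost:9200/_cluster/pending_tasks?pretty'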

Elasticsearch 2.1.0

Elasticsearch version 2.1.0 is not connecting to Kibana 4.3. I'm seeing a 'failed to delete temp file' error:
[2015-12-10 08:20:30,891][INFO ][gateway ] [Mass Master] recovered [1] indices into cluster_state
[2015-12-10 08:20:31,219][WARN ][index.translog ] [Mass Master] [.kibana][0] failed to delete temp file /home/ec2-user/elasticsearch-2.1.0/data/elasticsearch/nodes/0/indices/.kibana/0/translog/translog-6795115948573540946.tlog
java.nio.file.NoSuchFileException: /home/ec2-user/elasticsearch-2.1.0/data/elasticsearch/nodes/0/indices/.kibana/0/translog/translog-6795115948573540946.tlog
I referred to this link https://github.com/elastic/elasticsearch/pull/14872 but was not able to make much sense of it. Any help would be highly appreciated.
Try using compatible versions of Kibana and Elasticsearch, e.g. Kibana 4.3 with Elasticsearch 2.1. A version mismatch might be the issue in your case.
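To double-check which Elasticsearch version is actually running before matching it to a Kibana release, hitting the root endpoint works on any version (a general check, not specific to this setup):
curl -XGET 'localhost:9200/?pretty'
The version.number field in the response is the one to compare against Kibana's compatibility notes.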
