Elasticsearch - yellow status after opening Kibana

I downloaded Elasticsearch and started it from the console. ES worked fine and the status was green. Then I downloaded Kibana and started it - now the status stays yellow, even after I stop Kibana.
I see this in the log:
[o.e.c.r.a.AllocationService] [4e84hhA] Cluster health status changed
from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).
How can I fix it, and where can I find more information about this error?

That's probably because the .kibana index has one replica shard and you have a single ES node running.
Run this and you'll get a GREEN status again:
PUT /.kibana/_settings
{
  "index" : {
    "number_of_replicas" : 0
  }
}
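If you prefer to apply the same change with curl (assuming Elasticsearch is listening on the default localhost:9200), you can also confirm the result through the cluster health API afterwards:
curl -XPUT 'localhost:9200/.kibana/_settings' -H 'Content-Type: application/json' -d '
{
  "index" : {
    "number_of_replicas" : 0
  }
}'
curl 'localhost:9200/_cluster/health?pretty'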

Related

Exception : No alive nodes found in your cluster

I have an issue with Elasticsearch: when I run the command php artisan index:ambassadors inside Docker, it throws this exception.
**Exception : No alive nodes found in your cluster**
Here is my output.
Exception : No alive nodes found in your cluster
412/4119 [▓▓░░░░░░░░░░░░░░░░░░░░░░░░░░] 10%Exception : No alive nodes found in your cluster
824/4119 [▓▓▓▓▓░░░░░░░░░░░░░░░░░░░░░░░] 20%Exception : No alive nodes found in your cluster
1236/4119 [▓▓▓▓▓▓▓▓░░░░░░░░░░░░░░░░░░░░] 30%Exception : No alive nodes found in your cluster
1648/4119 [▓▓▓▓▓▓▓▓▓▓▓░░░░░░░░░░░░░░░░░] 40%Exception : No alive nodes found in your cluster
2472/4119 [▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓░░░░░░░░░░░░] 60%Exception : No alive nodes found in your cluster
2884/4119 [▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓░░░░░░░░░] 70%Exception : No alive nodes found in your cluster
3296/4119 [▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓░░░░░░] 80%Exception : No alive nodes found in your cluster
3997/4119 [▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓░] 97%Exception : No alive nodes found in your cluster
4119/4119 [▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓] 100%Exception : No alive nodes found in your cluster
I also see this message in my Elasticsearch container logs:
Some logging configurations have %marker but don't have %node_name. We will automatically add %node_name to the pattern to ease the migration for users who customize log4j2.properties but will stop this behavior in 7.0. You should manually replace `%node_name` with `[%node_name]%marker ` in these locations:
/usr/share/elasticsearch/config/log4j2.properties.
Has anyone faced this issue before?
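This exception usually means the PHP Elasticsearch client cannot reach any node at the host it is configured with. A quick connectivity check from inside the application container may help narrow it down (the "app" and "elasticsearch" names below are placeholders for your own container/service names):
# Replace "app" and "elasticsearch" with your actual container/service names.
docker exec -it app curl -s http://elasticsearch:9200
# If this fails, point the PHP client's host setting at the Elasticsearch
# service name on the Docker network instead of localhost.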

Elasticsearch is still initializing the kibana index. Deleting the index doesn't help

I have two servers: Kibana is installed on one and Elasticsearch on the other.
The Kibana version is 4.5.4 and the Elasticsearch version is 2.3.1.
Here is what I get when I start Kibana:
log [07:25:26.859] [info][status][plugin:kibana] Status changed from uninitialized to green - Ready
log [07:25:26.890] [info][status][plugin:elasticsearch] Status changed from uninitialized to yellow - Waiting for Elasticsearch
log [07:25:26.905] [info][status][plugin:kbn_vislib_vis_types] Status changed from uninitialized to green - Ready
log [07:25:26.913] [info][status][plugin:markdown_vis] Status changed from uninitialized to green - Ready
log [07:25:26.919] [info][status][plugin:metric_vis] Status changed from uninitialized to green - Ready
log [07:25:26.923] [info][status][plugin:spyModes] Status changed from uninitialized to green - Ready
log [07:25:26.940] [info][status][plugin:statusPage] Status changed from uninitialized to green - Ready
log [07:25:26.945] [info][status][plugin:table_vis] Status changed from uninitialized to green - Ready
log [07:25:26.950] [error][status][plugin:elasticsearch] Status changed from yellow to red - Elasticsearch is still initializing the kibana index.
log [07:25:26.952] [info][listening] Server running at http://0.0.0.0:5601
Following this answer, I deleted the index in Elasticsearch.
Then I restarted Kibana and Elasticsearch, and here is what the log shows when I start Kibana:
log [07:29:57.455] [info][status][plugin:kibana] Status changed from uninitialized to green - Ready
log [07:29:57.488] [info][status][plugin:elasticsearch] Status changed from uninitialized to yellow - Waiting for Elasticsearch
log [07:29:57.502] [error][elasticsearch] Request error, retrying -- connect ECONNREFUSED 10.205.102.36:9200
log [07:29:57.508] [warning][elasticsearch] Unable to revive connection: http://10.205.102.36:9200/
log [07:29:57.509] [warning][elasticsearch] No living connections
log [07:29:57.511] [error][status][plugin:elasticsearch] Status changed from yellow to red - Unable to connect to Elasticsearch at http://10.205.102.36:9200.
log [07:29:57.512] [info][status][plugin:kbn_vislib_vis_types] Status changed from uninitialized to green - Ready
log [07:29:57.516] [info][status][plugin:markdown_vis] Status changed from uninitialized to green - Ready
log [07:29:57.519] [info][status][plugin:metric_vis] Status changed from uninitialized to green - Ready
log [07:29:57.529] [info][status][plugin:spyModes] Status changed from uninitialized to green - Ready
log [07:29:57.533] [info][status][plugin:statusPage] Status changed from uninitialized to green - Ready
log [07:29:57.536] [info][status][plugin:table_vis] Status changed from uninitialized to green - Ready
log [07:29:57.542] [info][listening] Server running at http://0.0.0.0:5601
log [07:30:05.102] [info][status][plugin:elasticsearch] Status changed from red to yellow - No existing Kibana index found
log [07:30:35.288] [error][status][plugin:elasticsearch] Status changed from yellow to red - Waiting for Kibana index ".kibana" to come online failed.
log [07:30:37.803] [error][status][plugin:elasticsearch] Status changed from red to red - Elasticsearch is still initializing the kibana index.
So when I delete the index, it gets created again and shows the same error. How can I resolve this problem?
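While Kibana is stuck in this state, it may help to ask Elasticsearch directly what it reports about the .kibana index; for example, against the Elasticsearch host shown in the logs (adjust the address to your own setup):
curl 'http://10.205.102.36:9200/_cluster/health?pretty'
curl 'http://10.205.102.36:9200/_cat/shards/.kibana?v'
If the health call itself is refused, as in the ECONNREFUSED line above, Elasticsearch is not reachable at all and the index state is not the real problem.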

Timeout on deleting a snapshot repository

I'm running Elasticsearch 1.7.5 with 19 nodes (12 data nodes).
I'm attempting to set up snapshots for backup and recovery, but I'm getting a 503 on creation and deletion of a snapshot repository.
curl -XDELETE 'localhost:9200/_snapshot/backups?pretty'
returns:
{
  "error" : "RemoteTransportException[[masternodename][inet[/10.0.0.20:9300]][cluster:admin/repository/delete]]; nested: ProcessClusterEventTimeoutException[failed to process cluster event (delete_repository [backups]) within 30s]; ",
  "status" : 503
}
I adjusted the request with master_timeout=10m, but I am still getting a timeout. Is there a way to debug why this request is failing?
Performance on this call seems to be related to pending tasks with a higher priority.
https://discuss.elastic.co/t/timeout-on-deleting-a-snapshot-repository/69936/4
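Since the delete appears to be waiting behind higher-priority cluster state updates, inspecting the pending task queue on the same node can show what is blocking it:
curl 'localhost:9200/_cluster/pending_tasks?pretty'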

Kibana service running, but no response from browser

My Kibana is running normally, but I can't open the link in a browser.
The Kibana logs are below:
log [14:09:05.036] [info][status][plugin:kibana] Status changed from uninitialized to green - Ready
log [14:09:05.065] [info][status][plugin:elasticsearch] Status changed from uninitialized to yellow - Waiting for Elasticsearch
log [14:09:05.100] [info][status][plugin:shield] Status changed from uninitialized to green - Ready
log [14:09:05.103] [info][status][plugin:kbn_vislib_vis_types] Status changed from uninitialized to green - Ready
log [14:09:05.111] [info][status][plugin:markdown_vis] Status changed from uninitialized to green - Ready
log [14:09:05.116] [info][status][plugin:metric_vis] Status changed from uninitialized to green - Ready
log [14:09:05.118] [info][status][plugin:spyModes] Status changed from uninitialized to green - Ready
log [14:09:05.128] [info][status][plugin:statusPage] Status changed from uninitialized to green - Ready
log [14:09:05.132] [info][status][plugin:table_vis] Status changed from uninitialized to green - Ready
log [14:09:05.136] [info][status][plugin:elasticsearch] Status changed from yellow to green - Kibana index ready
log [14:09:05.140] [info][listening] Server running at https://0.0.0.0:5600
I tested Elasticsearch with:
curl localhost:9200
It shows,
{
  "name" : "Scream",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.3.3",
    "build_hash" : "218bdf10790eef486ff2c41a3df5cfa32dadcfde",
    "build_timestamp" : "2016-05-17T15:40:04Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.0"
  },
  "tagline" : "You Know, for Search"
}
but for kibana:
curl localhost:5600
curl: (52) Empty reply from server
Here is my Kibana config:
port: 5600
# The host to bind the server to.
host: "0.0.0.0"
# The Elasticsearch instance to use for all your queries.
elasticsearch_url: "http://localhost:9200"
This problem is quite old, but in case someone ends up here: make sure to use "https".
The fine print does say to "ignore the self-signed certificate error" :)
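Concretely, the startup log above shows Kibana listening on https://0.0.0.0:5600, so a plain-HTTP curl to that port gets an empty reply. Retrying over HTTPS and skipping verification of the self-signed certificate (curl's -k flag) should answer:
curl -k https://localhost:5600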
I was stuck on the same issue.
I think the problem occurs when you do port forwarding.
You have to set 0.0.0.0:5601 as the source port.
I ran the latest GitHub 5.5.0 install, and when I tried to bring up localhost, IE just showed "Kibana LOADING...".
However, when I installed Chrome, Kibana popped right up. So the solution is to use a different browser.

logstash elasticsearch unassigned shards

I'm having trouble figuring out why this is happening and how to fix it:
Note that the first logstash index, which has all shards assigned, is only in that state because I manually assigned them.
Everything had been working as expected for several months until today, when I noticed that shards have been lying around unassigned on all my logstash indexes.
Using Elasticsearch v1.4.0
My Elasticsearch logs appear as follows up until July 29, 2015:
[2015-07-29 00:01:53,352][DEBUG][discovery.zen.publish ] [Thor] received cluster state version 4827
Also, there is a script running on the server which trims the number of logstash indexes down to 30. I think that is where this type of log line comes from:
[2015-07-29 17:43:12,800][DEBUG][gateway.local.state.meta ] [Thor] [logstash-2015.01.25] deleting index that is no longer part of the metadata (indices: [[kibana-int, logstash-2015.07.11, logstash-2015.07.04, logstash-2015.07.03, logstash-2015.07.12, logstash-2015.07.21, logstash-2015.07.17, users_1416418645233, logstash-2015.07.20, logstash-2015.07.25, logstash-2015.07.06, logstash-2015.07.28, logstash-2015.01.24, logstash-2015.07.18, logstash-2015.07.26, logstash-2015.07.08, logstash-2015.07.19, logstash-2015.07.09, logstash-2015.07.22, logstash-2015.07.07, logstash-2015.07.29, logstash-2015.07.10, logstash-2015.07.05, logstash-2015.07.01, logstash-2015.07.16, logstash-2015.07.24, logstash-2015.07.02, logstash-2015.07.27, logstash-2015.07.14, logstash-2015.07.13, logstash-2015.07.23, logstash-2015.06.30, logstash-2015.07.15]])
On July 29, there are a few new entries which I haven't seen before:
[2015-07-29 00:01:38,024][DEBUG][action.bulk ] [Thor] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]
[2015-07-29 17:12:58,658][INFO ][cluster.routing.allocation.decider] [Thor] updating [cluster.routing.allocation.enable] from [ALL] to [NONE]
[2015-07-29 17:12:58,658][INFO ][cluster.routing.allocation.decider] [Thor] updating [cluster.routing.allocation.disable_allocation] from [false] to [true]
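Those last two lines show cluster-wide shard allocation being switched from ALL to NONE, which would leave new replica shards unassigned. If that change was not intentional, allocation can be re-enabled with a transient cluster settings update (a sketch using the setting name from the log, run against a local node):
curl -XPUT 'localhost:9200/_cluster/settings' -d '
{
  "transient" : {
    "cluster.routing.allocation.enable" : "all"
  }
}'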
