I am using Elasticsearch version 2.1.0. How can I find out which version of Curator is being used?
While changing the settings (number of replicas), I am getting this exception:
{
"error": {
"root_cause": [
{
"type": "illegal_argument_exception",
"reason": "Can't update [index.number_of_replicas] on closed indices [[.marvel-es-2016.12.12]] - can leave index in an unopenable state"
}
]
},
"status": 400
}
Any clues?
You can get the Curator version with the following command:
$ curator --version
I think you are trying to set the number of replicas on indices that are in the closed state.
Open the indices first, then set the replicas.
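For example, a minimal sketch using the index named in the error (the replica count of 1 is just an example):
curl -XPOST 'localhost:9200/.marvel-es-2016.12.12/_open'
curl -XPUT 'localhost:9200/.marvel-es-2016.12.12/_settings' -d '{"index": {"number_of_replicas": 1}}'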
I took an Elasticsearch snapshot (snapshot_1) of 4 indices on EC2 server "A" and copied the data to another EC2 server "B". I updated the path in elasticsearch.yml and restarted ES on server B. (I updated the path and restarted ES before putting the data on that path, but the path already existed and had the required access.)
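For reference, the relevant setup on server B looks roughly like this (the path here is a placeholder; my actual path differs):
# in elasticsearch.yml
path.repo: ["/mnt/backups/uat_dump"]
# register the repository over the copied files
curl -XPUT 'localhost:9200/_snapshot/my_backup' -d '{
"type": "fs",
"settings": { "location": "/mnt/backups/uat_dump" }
}'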
Upon querying the index file in the snapshot directory I do see that snapshot_1 exists.
[elasticsearch@2ed2c2eaa5be uat_dump]$ cat index-0
{"snapshots":[{"name":"snapshot_1","uuid":"bBc6chD0TCKiQvuqn8gsow","state":1}],"indices":{"my_index_1":{"id":"J-c4ZvN0T02HeyQR8ueyZw","snapshots":["bBc6chD0TCKiQvuqn8gsow"]},"my_index_2":{"id":"ifn1Geq2RHe6wAMuGxpAMw","snapshots":["bBc6chD0TCKiQvuqn8gsow"]},"my_index_3":{"id":"X9dPrB3fRd-WrfNnZN69mQ","snapshots":["bBc6chD0TCKiQvuqn8gsow"]},"my_index_4":{"id":"9OjzD37WRROJFkfu-N7LNg","snapshots":["bBc6chD0TCKiQvuqn8gsow"]}}}
But when I run the restore, I receive this error:
{
"error": {
"root_cause": [
{
"type": "snapshot_restore_exception",
"reason": "[my_backup:snapshot_1] snapshot does not exist"
}
],
"type": "snapshot_restore_exception",
"reason": "[my_backup:snapshot_1] snapshot does not exist"
},
"status": 500
}
There is also a file named incompatible-snapshots; its contents are:
{"incompatible-snapshots":[]}
Any pointers on what I could be doing wrong?
I have a Kubernetes (K8s) cluster on GCP running Elasticsearch, and now I need to create a backup.
I've installed the GCS plugin on the pods in my stateful set and tried setting it up following this documentation:
https://github.com/elastic/elasticsearch/blob/master/docs/plugins/repository-gcs.asciidoc
When I try to configure a repository that uses credentials stored in the keystore, I get the following response:
{
"error": {
"root_cause": [
{
"type": "repository_exception",
"reason": "[my_backup] repository type [gcs] does not exist"
}
],
"type": "repository_exception",
"reason": "[my_backup] repository type [gcs] does not exist"
},
"status": 500
}
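For reference, my setup steps looked roughly like this (the bucket name and key-file path are placeholders):
/usr/share/elasticsearch/bin/elasticsearch-keystore add-file gcs.client.default.credentials_file /path/to/service-account.json
curl -XPUT 'localhost:9200/_snapshot/my_backup' -H 'Content-Type: application/json' -d '{
"type": "gcs",
"settings": { "bucket": "my-backup-bucket", "client": "default" }
}'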
Any lead would be helpful, thanks!
I think the problem was that I couldn't install the plugin on the nodes, so I installed it on the pods instead, and that installation did not persist across pod restarts. To make the installation persist on K8s I needed to build a custom image that installs the plugin. A bit tricky, and the plugin seems to be intended for GCE anyway, so I decided to move from K8s to a managed instance group on GCE instead.
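For anyone who stays on Kubernetes, a minimal sketch of such a custom image (the base image tag is an assumption; pick the one matching your cluster):
FROM docker.elastic.co/elasticsearch/elasticsearch:6.2.4
RUN bin/elasticsearch-plugin install --batch repository-gcs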
I'm running Elasticsearch 1.7.5 with 19 nodes (12 data nodes).
I'm attempting to set up snapshots for backup and recovery, but I get a 503 on creation and deletion of a snapshot repository.
curl -XDELETE 'localhost:9200/_snapshot/backups?pretty'
returns:
{
"error" : "RemoteTransportException[[masternodename][inet[/10.0.0.20:9300]][cluster:admin/repository/delete]]; nested: ProcessClusterEventTimeoutException[failed to process cluster event (delete_repository [backups]) within 30s]; ",
"status" : 503
}
I tried adjusting the request with master_timeout=10m and still get a timeout. Is there a way to debug the cause of this failing request?
Performance on this call seems to be related to pending tasks with a higher priority.
https://discuss.elastic.co/t/timeout-on-deleting-a-snapshot-repository/69936/4
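To see which higher-priority tasks are queued ahead of the repository call, the pending-tasks APIs can help:
curl 'localhost:9200/_cluster/pending_tasks?pretty'
curl 'localhost:9200/_cat/pending_tasks?v'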
I'm using ES 1.4.4, LS 1.5, and Kibana 4 on Debian.
I start Logstash and it works fine for a couple of minutes, then I hit a fatal error.
The only way I've found to shut Logstash down is to delete the recent data stored in ES.
One more relevant fact: Elasticsearch itself looks OK, I can see old data in Kibana and the head plugin works fine.
My output config:
output {
  elasticsearch {
    port => 9200
    protocol => http
    host => "127.0.0.1"
  }
}
Any help will be appreciated :)
Here is the full error message:
Got error to send bulk of actions to elasticsearch server at 127.0.0.1 : Read timed out {:level=>:error}
Failed to flush outgoing items {:outgoing_count=>1362, :exception=>#, :backtrace=>[
"/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.3.5-java/lib/manticore/response.rb:35:in `initialize'",
"org/jruby/RubyProc.java:271:in `call'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.3.5-java/lib/manticore/response.rb:61:in `call'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.3.5-java/lib/manticore/response.rb:224:in `call_once'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.3.5-java/lib/manticore/response.rb:127:in `code'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/transport/http/manticore.rb:50:in `perform_request'",
"org/jruby/RubyProc.java:271:in `call'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/transport/base.rb:187:in `perform_request'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/transport/http/manticore.rb:33:in `perform_request'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/client.rb:115:in `perform_request'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.7/lib/elasticsearch/api/actions/bulk.rb:80:in `bulk'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch/protocol.rb:82:in `bulk'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:413:in `submit'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:412:in `submit'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:438:in `flush'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:436:in `flush'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:219:in `buffer_flush'",
"org/jruby/RubyHash.java:1341:in `each'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:216:in `buffer_flush'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:193:in `buffer_flush'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:159:in `buffer_receive'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:402:in `receive'",
"/opt/logstash/lib/logstash/outputs/base.rb:88:in `handle'",
"(eval):1070:in `initialize'",
"org/jruby/RubyArray.java:1613:in `each'",
"org/jruby/RubyEnumerable.java:805:in `flat_map'",
"(eval):1067:in `initialize'",
"org/jruby/RubyProc.java:271:in `call'",
"/opt/logstash/lib/logstash/pipeline.rb:279:in `output'",
"/opt/logstash/lib/logstash/pipeline.rb:235:in `outputworker'",
"/opt/logstash/lib/logstash/pipeline.rb:163:in `start_outputs'"], :level=>:warn}
Your Elasticsearch has likely run out of storage and is unable to write the new documents coming from Logstash. Try deleting old indices and then clearing the read-only block:
PUT your_index/_settings
{
"index": {
"blocks.read_only": false
}
}
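To confirm that disk space is actually the problem, the cat allocation API shows per-node disk usage:
curl 'localhost:9200/_cat/allocation?v'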
I hope this works for you. Thanks!
I wanted to try Elasticsearch with Polish language support, but I'm having some problems with it.
I installed the Stempel analysis plugin and I'm trying to create an index that uses the Polish analyzer:
curl -XPUT localhost:9200/polisz -d '{
"mappings" : {
"_default_" : {
"properties" : {
"text_entry" : { "type": "string", "analyzer": "polish" }
}
}
}
}
'
But I get an error about an unrecognized analyzer:
{
"status" : 400,
"error" : "MapperParsingException[mapping [_default_]]; nested: MapperParsingException[Analyzer [polish] not found for field [text_entry]]; "
}
Should I do anything after installing the plugin and rebooting ES?
I can't find any specific instructions about using the plugin, so maybe I'm just doing something obviously wrong?
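One quick sanity check is to ask the _analyze API for the analyzer directly; if the plugin loaded correctly, this should return stemmed Polish tokens (the sample text is arbitrary):
curl -XGET 'localhost:9200/_analyze?analyzer=polish&pretty' -d 'jabłka i gruszki'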
Some more details on how I set up my environment:
I installed and ran a Docker image with ES and Kibana using these commands:
docker pull minimum2scp/es-kibana
docker run -d -p 8080:80 -p 9200:9200 --name es minimum2scp/es-kibana
I installed the Stempel plugin with this command:
host$ docker exec -it es bash
root@docker-es:/# /usr/share/elasticsearch/bin/plugin install elasticsearch/elasticsearch-analysis-stempel/2.4.2
Then I restarted Elasticsearch:
root@docker-es:/# service elasticsearch restart
I'll be grateful for any help!
Krzysztof
OK, I got it. It seems that my plugin didn't install correctly. Even though the plugin install command didn't return any errors, and neither did the Elasticsearch restart, there was a Lucene version mismatch between Elasticsearch (4.10.2, as the log below shows) and the plugin (4.10.3).
It was enough to look into the elasticsearch.log file to find it out... My bad.
BUT there is more to it: I switched to the most popular (by stars) Elasticsearch Docker image, which is dockerfile/elasticsearch. It ships ES 1.4.2, which is based on Lucene 4.10.2, still mismatching the plugin's Lucene 4.10.3. That causes the error even though the plugin's authors state that plugin version 2.4.2 (the current stable) supports the 1.4 ES line.
Quoting the error here for future web searches:
[2015-02-13 10:57:11,850][INFO ][node ] [Necromantra] version[1.4.2], pid[1], build[927caff/2014-12-16T14:11:12Z]
[2015-02-13 10:57:11,851][INFO ][node ] [Necromantra] initializing ...
[2015-02-13 10:57:11,884][ERROR][plugins ] [Necromantra] cannot start plugin due to incorrect Lucene version: plugin [4.10.3], node [4.10.2].
[2015-02-13 10:57:11,884][WARN ][plugins ] [Necromantra] failed to load plugin from [jar:file:/data/plugins/analysis-stempel/elasticsearch-analysis-stempel-2.4.2.jar!/es-plugin.properties]
For now I chose to downgrade the plugin to 2.4.1, which agrees with my ES 1.4.2. In the long term, though, I would look for a Docker image with ES 1.4.3, which hopefully upgrades the Lucene version as well.
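The downgrade itself reuses the plugin command, pinned to 2.4.1; the remove step uses the plugin directory name from the log above (note that ES 1.x plugin scripts expect the --remove/--install flag form, even though the bare form appeared to work above):
root@docker-es:/# /usr/share/elasticsearch/bin/plugin --remove analysis-stempel
root@docker-es:/# /usr/share/elasticsearch/bin/plugin --install elasticsearch/elasticsearch-analysis-stempel/2.4.1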
Dadoonet, thank you for taking a closer look at my problem.