Elasticsearch reindex error - reindex from remote

I am trying to reindex directly from Elasticsearch 1.0 to Elasticsearch 5.0 using the reindex-from-remote option.
Both versions are installed on the same remote system, running on ports 9200 and 9201 respectively.
I have followed the steps for reindexing from remote. First I created a snapshot of the data in Elasticsearch 1.0. The mapping for the data was created in Elasticsearch 5.0 under a new index name. But whenever I try to post the JSON document using this curl command:
curl -XPOST "localhost:9201/_reindex" -d @reindex.json
{
  "source": {
    "remote": {
      "host": "localhost:9200",
      "index": "customer"
    }
  },
  "dest": {
    "index": "new_customer"
  }
}
I am getting an error like this.
Please help me resolve the issue

Please copy & paste error messages instead of posting a screenshot in the future.
Your screenshot shows that Elasticsearch actually returns a useful error message: you did not specify a scheme for the hostname. The scheme here means you have to include http or https as part of the host value.
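With the scheme added, reindex.json would look something like this (a sketch: it also moves "index" up under "source", which is where the 5.x reindex API expects it rather than inside "remote"):

```json
{
  "source": {
    "remote": {
      "host": "http://localhost:9200"
    },
    "index": "customer"
  },
  "dest": {
    "index": "new_customer"
  }
}
```

Note that reindex-from-remote also requires the remote host to be whitelisted on the 5.x node, via reindex.remote.whitelist: "localhost:9200" in elasticsearch.yml.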

Answering because I lack the reputation to comment.
The following probably isn't the cause of your error, but it will help you once you get past it.
A snippet from ES documentation:
A snapshot of an index created in 2.x can be restored to 5.x.
A snapshot of an index created in 1.x can be restored to 2.x.
A snapshot of an index created in 1.x can not be restored to 5.x.
To restore a snapshot of an index created in 1.x to 5.x you can restore it to a 2.x cluster and use reindex-from-remote to rebuild the index in a 5.x cluster.
Link to documentation

Delete all elasticsearch indices directly without curl

I am starting elasticsearch, and getting the error:
java.lang.IllegalStateException: unable to upgrade the mappings for the index [[documents/xOOEXQB-RzGhQp7o7NNH9w]]
at org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService.checkMappingsCompatibility(MetaDataIndexUpgradeService.java:172) ~[elasticsearch-5.5.0.jar:5.5.0]
I am not exactly sure what caused this to happen. I did do a
brew upgrade elasticsearch but I didn't note down the last version. I am currently on elasticsearch 5.5.
I would like to just clear away all the mappings/indices for Elasticsearch. I don't need this data, as it is only for testing. Most of the documentation says to use
curl -XDELETE 'http://localhost:9200/_all'
However, localhost:9200 isn't reachable (it was previously), presumably because Elasticsearch cannot start properly, so it is a bit of a chicken-and-egg problem.
Is there a way for me to clear away all elasticsearch data manually?
You probably have some leftover indices that are incompatible with your newest ES version; most likely you were on ES 1.x before.
You can simply delete everything under the $ES_HOME/data folder. Since you installed ES via brew, ES_HOME is usually located under /usr/local/Cellar/elasticsearch.
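For example (a sketch that uses a throwaway directory as a stand-in for the real ES_HOME, so it is safe to try; point it at your actual install and stop Elasticsearch first):

```shell
# Stand-in for the real ES_HOME so these commands are safe to experiment
# with; replace with your actual Elasticsearch home directory.
ES_HOME=$(mktemp -d)
mkdir -p "$ES_HOME/data/nodes/0"   # simulate an existing data directory

# Stop Elasticsearch before doing this, then wipe everything under data/.
rm -rf "$ES_HOME"/data/*

ls -A "$ES_HOME/data"   # prints nothing: all index data is gone
```

After that, starting Elasticsearch creates a fresh, empty data directory.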

Modifying default elasticsearch template in logstash 5.x

I've set up an Elastic Stack 5.3 to aggregate logs from a bunch of servers, with Filebeat in each of the servers scraping the logs and sending them to a centralised Logstash, Elasticsearch and Kibana.
I've set up my Logstash configuration to extract some custom string fields but I wish to change the index template to change their type from "text" to "keyword". I've found the configuration directives to specify my own template, but where can I find Logstash's default template so I can use it as a starting point? I've searched under /etc/logstash and /usr/share/logstash (I've installed a vanilla Logstash 5.3 RPM on RHEL 7) but couldn't find anything.
Any good example of how to create a non-standard index template on logstash 5.x would be really handy; most of the examples I have found predate Beats and the new string types in 5.x. The documentation leaves something to be desired.
The default Elasticsearch index template can be found in the logstash-output-elasticsearch plugin repository at https://github.com/logstash-plugins/logstash-output-elasticsearch/tree/master/lib/logstash/outputs/elasticsearch
You'll find several templates there, for ES 2.x, 5.x, and 6.x; the one you're looking for is probably the 5.x one.
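Once you've copied and edited that template (e.g. switching your custom string fields from "text" to "keyword"), you can point the elasticsearch output at it. A sketch, assuming the edited file is saved as /etc/logstash/my-template.json (a hypothetical path); template, template_name, and template_overwrite are settings of the logstash-output-elasticsearch plugin:

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # Use the customised copy of the 5.x default template:
    template => "/etc/logstash/my-template.json"
    template_name => "logstash"
    template_overwrite => true
  }
}
```

With template_overwrite set, Logstash replaces the existing template in Elasticsearch on startup, so your changes take effect for newly created indices.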

Elasticsearch is still initializing the kibana index

When I try to start Kibana I face the following issue. I first restarted my Elasticsearch server and it was running successfully. After starting Elasticsearch I tried to start Kibana, but no luck.
{"name":"Kibana","hostname":"ABCD","pid":3848,"level":30,"msg":"Elasticsearch is still initializing the kibana index... Trying again in 2.5 second.","time":"2015-07-03T07:35:34.936Z","v":0}
Thanks in advance
The curl -XDELETE http://localhost:9200/.kibana command works fine, but you lose all your Kibana settings (indexes, graphs, dashboards); by just querying the index I solved the problem without losing my data. For example:
curl -s http://localhost:9200/.kibana/_recovery?pretty
curl -XPUT 'localhost:9200/.kibana/_settings' -d '
{
  "index": {
    "number_of_replicas": 0
  }
}'
Then start Kibana, it should work.
Warning: removing the .kibana index will make you lose all your Kibana settings (indexes, graphs, dashboards).
This behavior is sometimes caused by an existing .kibana index.
Kindly delete the .kibana index in Elasticsearch using the following command:
curl -XDELETE http://localhost:9200/.kibana
After deleting the index, restart Kibana.
If the problem still persists and you are willing to lose any existing data, you can try deleting all indexes using the following command:
curl -XDELETE http://localhost:9200/*
Then restart Kibana.
Note: localhost:9200 is the elasticsearch server's host:port, which may be different in your case.
Sometimes you need to wait a few minutes after restarting ES.
It can also be caused by low disk space.
Observed on an AWS t2.small machine running the ELK stack.
Something is wrong with your Kibana index inside Elasticsearch.
I had the same message; I just deleted my Kibana index, and when I restarted Kibana, a new index was created by the service.

Couchbase replication issue with Elasticsearch

I'm having an issue with my Elasticsearch replication of my Couchbase DB.
Certain documents in my DB are not being replicated correctly. Rather than replicating as full couchbase documents, there is an entry with _type of couchbaseCheckpoint pointing to certain documents. These checkpoints never seem to update or grab the documents correctly, even after a refresh. This seems to occur at random, with some documents replicating correctly and being stored as full couchbaseDocuments, while others stay as these checkpoints. I'm not seeing any errors related to this in my logs at all.
versions:
couchbase 3.0.1
elasticsearch 1.3
couchbase elasticsearch plugin 2.0.0
Example document:
{
  "name": "xxx",
  "age": "1"
}

Running Kibana3, LogStash and ElasticSearch, all in one machine

Kibana3 works successfully when ElasticSearch is in a different machine, by setting elasticsearch: "http://different_machine_ip:9200" in config.js of Kibana3.
Now, I want to run all three of them on my local machine for testing. I'm using Windows 7 and the Chrome browser. I installed Kibana 3 on Tomcat 7 and started the embedded ElasticSearch from the LogStash jar file.
I set the ElasticSearch location to "localhost:9200" or "127.0.0.1:9200" or "computer_name:9200". When I check Kibana 3 in the browser, the ElasticSearch query revealed via spying has no logstash index.
curl -XGET 'http://localhost:9200//_search?pretty' -d ''
As you can see, the index part is empty, showing // only. The expected query should look like this.
curl -XGET 'http://localhost:9200/logstash-2013.08.13/_search?pretty' -d 'Some JSON Data'
The browser is able to call the ElasticSearch API successfully. For example, typing http://localhost:9200/logstash-2013.08.13/_mapping?pretty=true into the address bar returns the mapping of the logstash index. This proves there is no problem connecting to ElasticSearch.
The problem here is that the index is empty from Kibana query. Why is the index empty?
Kibana 3 works differently from Kibana 1 and 2: it runs entirely in the browser.
The config file is read by JavaScript and executed in your browser, so localhost:9200 tells Kibana to look for ElasticSearch running on the machine in front of you, not the server.
BTW, recent versions of LogStash have Kibana bundled, so you don't have to host it independently.
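For reference, the relevant setting lives in Kibana 3's config.js, which the browser evaluates. A common trick is to derive the host from the page URL rather than hardcoding it (a sketch based on the stock config.js layout; adjust the other values to your setup):

```javascript
/** config.js (Kibana 3) -- loaded and evaluated by the visitor's browser */
define(['settings'], function (Settings) {
  return new Settings({
    // Resolved in the browser, so derive the ES address from the URL used
    // to reach Kibana instead of hardcoding "localhost":
    elasticsearch: "http://" + window.location.hostname + ":9200",
    default_route: '/dashboard/file/default.json',
    kibana_index: ".kibana-int",
    panel_names: ['histogram', 'map', 'table', 'filtering', 'query']
  });
});
```

This way the same config works whether Kibana is opened via localhost or via the machine's network name, as long as ElasticSearch is reachable from the visiting browser on port 9200.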
