Reindex Elasticsearch index returns "Incorrect HTTP method for uri [/_reindex] and method [GET], allowed: [POST]"

I'm trying to upgrade an elasticsearch cluster from 1.x to 6.x. I'm reindexing the remote 1.x indices into the 6.x cluster. According to the docs, this is possible:
To upgrade an Elasticsearch 1.x cluster, you have two options:
1. Perform a full cluster restart upgrade to Elasticsearch 2.4.x and reindex or delete the 1.x indices. Then, perform a full cluster restart upgrade to 5.6 and reindex or delete the 2.x indices. Finally, perform a rolling upgrade to 6.x. For more information about upgrading from 1.x to 2.4, see Upgrading Elasticsearch in the Elasticsearch 2.4 Reference. For more information about upgrading from 2.4 to 5.6, see Upgrading Elasticsearch in the Elasticsearch 5.6 Reference.
2. Create a new 6.x cluster and reindex from remote to import indices directly from the 1.x cluster.
I'm doing this locally for test purposes, and using the following command with 6.x running:
curl --request POST localhost:9200/_reindex -d @reindex.json
My reindex.json file looks like this:
{
  "source": {
    "remote": {
      "host": "http://localhost:9200"
    },
    "index": "some_index_name",
    "query": {
      "match": {
        "test": "data"
      }
    }
  },
  "dest": {
    "index": "some_index_name"
  }
}
However, this returns the following error:
Incorrect HTTP method for uri [/_reindex] and method [GET], allowed: [POST]
Why is it telling me I can't use GET and to use POST instead? I'm clearly specifying a POST request here, but it seems to think it's a GET request. Any idea why it's getting the wrong request type?

I was facing the same issue, but it worked after I created the index with explicit settings and mappings in a PUT request.
PUT /my_blog
{
  "settings": {
    "number_of_shards": 1
  },
  "mappings": {
    "post": {
      "properties": {
        "user_id": {
          "type": "integer"
        },
        "post_text": {
          "type": "text"
        },
        "post_date": {
          "type": "date"
        }
      }
    }
  }
}
You can also refer to this: https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html
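Separately, note the shape of the curl command itself: with curl, a body file must be referenced as -d @reindex.json (the @ prefix makes curl read the file), and Elasticsearch 6.0+ also rejects request bodies sent without an explicit JSON content type. A minimal working invocation, assuming the reindex.json file from the question, would be:
curl -X POST 'http://localhost:9200/_reindex' \
  -H 'Content-Type: application/json' \
  -d @reindex.json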

Related

How to migrate an index from an old server to a new Elasticsearch server

I have an index on an old Elasticsearch server (version 6.2.0, on Windows Server) and I am now trying to move it to a new server (version 7.6.2, on Linux). I tried the command below to migrate my index from the old server to the new one, but it throws an exception.
POST _reindex
{
  "source": {
    "remote": {
      "host": "http://MyOldDNSName:9200"
    },
    "index": "test"
  },
  "dest": {
    "index": "test"
  }
}
The exception I am getting is:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "illegal_argument_exception",
        "reason" : "[MyOldDNSName:9200] not whitelisted in reindex.remote.whitelist"
      }
    ],
    "type" : "illegal_argument_exception",
    "reason" : "[MyOldDNSName:9200] not whitelisted in reindex.remote.whitelist"
  },
  "status" : 400
}
Note: I did not create any index on the new Elasticsearch server. Do I have to create it with my old schema and then execute the above command?
The error message is quite clear: the remote host (the Windows server in your case) from which you are trying to reindex into your new host (Linux) is not whitelisted. Please refer to the Elasticsearch guide on how to reindex from remote for more info.
As per the same doc:
Remote hosts have to be explicitly whitelisted in elasticsearch.yml using the reindex.remote.whitelist property. It can be set to a comma-delimited list of allowed remote host and port combinations (e.g. otherhost:9200, another:9200, 127.0.10.*:9200, localhost:*).
The reindex-from-remote section of the reference docs is also useful for troubleshooting the issue:
https://www.elastic.co/guide/en/elasticsearch/reference/8.0/docs-reindex.html#reindex-from-remote
Add this to elasticsearch.yml, modifying it according to your environment:
reindex.remote.whitelist: "otherhost:9200, another:9200, 127.0.10.*:9200, localhost:*"
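For the case in this question (host name taken from the error message above), the entry would presumably be just:
reindex.remote.whitelist: "MyOldDNSName:9200"
Note that this is a static node setting, so the new 7.6.2 node has to be restarted for it to take effect.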

Use an Elasticsearch index from a newer version

Is it possible to use (e.g. reindex) an existing index from a newer Elasticsearch version? I tried to do it via the snapshot API, but that fails with:
the snapshot was created with Elasticsearch version [7.5.0] which is higher than the version of this node [7.4.2]
The reason we need to use the newer index is that we want to experiment with a plugin that is not yet available for the new version, but the experiments must be done on data indexed by the newer version.
The snapshot API won't work, since you are trying to restore the index on an instance older than the instance that created it.
You will need to have your index data on a 7.5 instance and use the reindex API on the 7.4.2 instance to reindex from remote.
It is something like this:
POST _reindex
{
  "source": {
    "remote": {
      "host": "http://7-5-remote-host:9200"
    },
    "index": "source"
  },
  "dest": {
    "index": "dest"
  }
}
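As in the previous answer, reindex from remote also requires the source host to be whitelisted in the destination node's elasticsearch.yml; assuming the placeholder host name above, something like:
reindex.remote.whitelist: "7-5-remote-host:9200"
followed by a restart of the 7.4.2 node.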
You can also use a logstash pipeline to read from your 7.5 instance and index on your 7.4.2 instance.
Something like this:
input {
  elasticsearch {
    hosts => "http://7-5-instance:9200"
    index => "your-index"
  }
}
output {
  elasticsearch {
    hosts => "http://7-4-instance:9200"
    index => "your-index"
  }
}
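One caveat with this pipeline as sketched: the documents are written with newly generated IDs. If the original document IDs should be preserved, the elasticsearch input can expose them via its docinfo option, roughly like this (same placeholder host and index names as above):
input {
  elasticsearch {
    hosts => "http://7-5-instance:9200"
    index => "your-index"
    # store _index/_type/_id in [@metadata] so the output can reuse them
    docinfo => true
  }
}
output {
  elasticsearch {
    hosts => "http://7-4-instance:9200"
    index => "your-index"
    document_id => "%{[@metadata][_id]}"
  }
}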

Fielddata is disabled on text fields by default

I've encountered a classic problem; however, no page on SO or any other Q&A site or forum has helped me.
I need to extract the numerical value of the parameter wsProcessingElapsedTimeMS out of a string like the one below (the parameter is contained in the message field):
2018-07-31 07:37:43,740|DEBUG|[ACTIVE] ExecuteThread: '43' for queue:
'weblogic.kernel.Default (self-tuning)'
|LoggerHandler|logMessage|sessionId=9AWTu
wsOperationName=FindBen wsProcessingEndTime=2018-07-31 07:37:43.738
wsProcessingElapsedTimeMS=6 httpStatus=200 outgoingAddress=172.xxx.xxx.xxx
and I keep getting this error:
"type":"illegal_argument_exception","reason":"Fielddata is disabled on text
fields by default. Set fielddata=true on [message] in order to load fielddata
in memory by uninverting the inverted index. Note that this can however use
significant memory. Alternatively use a keyword field instead."
The point is, I already ran a query (via Dev Tools in the Kibana GUI, if that matters) to enable fielddata on a field in the following way:
PUT my_index/_mapping/message
{
  "message": {
    "properties": {
      "publisher": {
        "type": "text",
        "fielddata": true
      }
    }
  }
}
which returned a brief confirmation:
{
  "acknowledged": true
}
After that, I tried to rebuild the index like this:
POST _reindex?wait_for_completion=false
{
  "source": {
    "index": "my_index"
  },
  "dest": {
    "index": "my_index"
  }
}
(the ?wait_for_completion=false flag is set because otherwise the request timed out; there's a lot of data in the system now).
And finally, having performed the above steps, I also tried relaunching the Kibana and Elasticsearch services (processes) to force a reindex (which took really long).
Also, using "message.keyword" instead of "message" (as suggested in the official documentation) does not help - it's just empty in most cases.
I'm using Kibana to access the Elasticsearch engine.
Elasticsearch v. 5.6.3
Kibana v. 5.6.3
Logstash v. 5.5.0
Any suggestion will be appreciated, even regarding the use of additional plugins (provided they have a release compatible with the above Kibana/Elasticsearch/Logstash versions, as I can't update them right now).
PUT your_index/_mapping/your_type
{
  "your_type": {
    "properties": {
      "publisher": {
        "type": "text",
        "fielddata": true
      }
    }
  }
}
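Since the error above complains about [message], not publisher, this template presumably needs to be adapted to the actual field and mapping type; assuming your_type is replaced with the type name from your existing mapping, that would be:
PUT my_index/_mapping/your_type
{
  "your_type": {
    "properties": {
      "message": {
        "type": "text",
        "fielddata": true
      }
    }
  }
}
Enabling fielddata on an existing text field takes effect without a reindex, because fielddata is built in memory at query time.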

How to test analyzer-smartcn plugin for Elastic Search on local machine

I installed the smartcn plugin on my Elasticsearch node, restarted Elasticsearch, and tried to create an index with these settings:
PUT /test_chinese
{
  "settings": {
    "index": {
      "analysis": {
        "analyzer": {
          "default": {
            "type": "smartcn"
          }
        }
      }
    }
  }
}
However, when I run this in Marvel, I get this error back, and I see a bunch of errors in Elasticsearch:
"error": "IndexCreationException[[test_chinese] failed to create
index]; nested: ElasticsearchIllegalArgumentException[failed to find
analyzer type [smartcn] or tokenizer for [default]]; nested:
NoClassSettingsException[Failed to load class setting [type] with
value [smartcn]]; nested:
ClassNotFoundException[org.elasticsearch.index.analysis.smartcn.SmartcnAnalyzerProvider];
", "status": 400
Any ideas what I might be missing?
I figured it out. I had manually installed the plugin from the zip, which was causing the issues. I reinstalled it the right way, with the version specific to Elasticsearch 1.7, and it worked.
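For future readers: on a 1.x node the supported route is the plugin script, roughly as below; the exact artifact version must match your Elasticsearch release, and the version segment here is a placeholder, not a real release number:
# run from the Elasticsearch home directory, then restart the node
bin/plugin --install elasticsearch/elasticsearch-analysis-smartcn/<version>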

Change geoip.location mapping from number to geo_point

I am using a Logstash geoip filter to add location data to my Filebeat IIS logs:
filter {
  geoip {
    source => "clienthost"
  }
}
But the data types in Elasticsearch are:
geoip.location.lon = NUMBER
geoip.location.lat = NUMBER
But in order to map points, I need to have
geoip.location = GEO_POINT
Is there a way to change the mapping?
I tried posting a changed mapping
sudo curl -XPUT "http://localhost:9200/_template/filebeat" -d@/etc/filebeat/geoip-mapping-new.json
with a new definition, but it makes no difference:
{
  "mappings": {
    "geoip": {
      "properties": {
        "location": {
          "type": "geo_point"
        }
      }
    }
  },
  "template": "filebeat-*"
}
Edit: I've tried this with both ES/Kibana/Logstash 5.6.3 and 5.5.0.
This is not a solution, but I deleted all the data and reinstalled ES, Kibana, Logstash and Filebeat 5.5.
Now ES recognizes location as a geo_point. I guess that even though I had changed the mapping, there was still data that had been mapped incorrectly, and Kibana was assuming the incorrect data type; a reindex of the complete data would probably have fixed the problem.
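A less destructive route would presumably have been to keep the corrected template and reindex the existing data into a new index: templates are applied at index creation time, so a destination index whose name still matches the filebeat-* pattern picks up the geo_point mapping. The index names below are illustrative:
POST _reindex
{
  "source": {
    "index": "filebeat-2017.11.01"
  },
  "dest": {
    "index": "filebeat-2017.11.01-reindexed"
  }
}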
