How to migrate index from Old Server to new server of elasticsearch - elasticsearch

I have an index on an old Elasticsearch 6.2.0 server (Windows Server) and I am now trying to move it to a new server (Linux) running Elasticsearch 7.6.2. I tried the command below to migrate the index from the old server to the new one, but it throws an exception.
POST _reindex
{
  "source": {
    "remote": {
      "host": "http://MyOldDNSName:9200"
    },
    "index": "test"
  },
  "dest": {
    "index": "test"
  }
}
The exception I am getting is:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "illegal_argument_exception",
        "reason" : "[MyOldDNSName:9200] not whitelisted in reindex.remote.whitelist"
      }
    ],
    "type" : "illegal_argument_exception",
    "reason" : "[MyOldDNSName:9200] not whitelisted in reindex.remote.whitelist"
  },
  "status" : 400
}
Note: I did not create any index in the new Elasticsearch server. Do I have to create it with my old schema first and then execute the above command?

The error message is quite clear: the remote host (the Windows machine in your case) from which you are trying to build an index on your new host (Linux) is not whitelisted. Please refer to the Elasticsearch guide on reindexing from a remote cluster for more info.
As per the same doc:
Remote hosts have to be explicitly whitelisted in elasticsearch.yml using the reindex.remote.whitelist property. It can be set to a comma delimited list of allowed remote host and port combinations (e.g. otherhost:9200, another:9200, 127.0.10.*:9200, localhost:*).
There is also a useful thread on the Elastic Discuss forum for troubleshooting this issue.

https://www.elastic.co/guide/en/elasticsearch/reference/8.0/docs-reindex.html#reindex-from-remote
Add this to elasticsearch.yml, and modify it according to your environment:
reindex.remote.whitelist: "otherhost:9200, another:9200, 127.0.10.*:9200, localhost:*"
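For the cluster in the question, a minimal sketch of the change on the destination (7.6.2) node could look like the following; reindex.remote.whitelist is a static node setting, so the node must be restarted after editing elasticsearch.yml:

# elasticsearch.yml on the new 7.6.2 node
reindex.remote.whitelist: "MyOldDNSName:9200"

After the restart, the same POST _reindex request should be accepted. Also note that _reindex does not copy settings or mappings from the source index, so if the old index relies on a specific mapping, create the destination index with that mapping before reindexing.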

Related

How to migrate elasticsearch indices to data streams

I was asked to migrate to data streams in Elasticsearch. I am a newbie in Elasticsearch and still learning about it. The only useful article I could find is: https://spinscale.de/posts/2021-07-07-elasticsearch-data-streams-explained.html#data-streams-in-kibana
Since we are using Elasticsearch under the basic license, I got an error when following along with the tutorial and creating an ILM policy.
{
  "error" : {
    "root_cause" : [
      {
        "type" : "illegal_argument_exception",
        "reason" : "policy [csc-stream-policy] defines the [searchable_snapshot] action but the current license is non-compliant for [searchable-snapshots]"
      }
    ],
    "type" : "illegal_argument_exception",
    "reason" : "policy [csc-stream-policy] defines the [searchable_snapshot] action but the current license is non-compliant for [searchable-snapshots]"
  },
  "status" : 400
}
Can anyone give me an idea of what else I could do to activate data streams in Elasticsearch? I can confirm that searchable snapshots are not supported in the free license. Is there another way around it?
Thanks in advance!
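One possible direction, sketched under the assumption that the searchable_snapshot action is not actually needed: data streams and ILM themselves are available on the basic license, so the error can be avoided by defining the policy without that action and attaching it to a data stream via an index template. The policy name below comes from the error message; the rollover/delete values, template name, and index pattern are only illustrative:

PUT _ilm/policy/csc-stream-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "7d", "max_size": "50gb" }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}

PUT _index_template/csc-stream-template
{
  "index_patterns": ["csc-stream*"],
  "data_stream": {},
  "template": {
    "settings": { "index.lifecycle.name": "csc-stream-policy" }
  }
}

The stream can then be created explicitly with PUT _data_stream/csc-stream, or it will be created automatically by the first write that uses op_type create and matches the template's pattern.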

How to update data type of a field in elasticsearch

I am publishing data to Elasticsearch using fluentd. It has a field Data.CPU which is currently mapped as a string. The index name is health_gateway.
I have made some changes in the Python code which generates the data, so this field Data.CPU is now an integer. But Elasticsearch is still showing it as a string. How can I update its data type?
I tried running the command below in Kibana Dev Tools:
PUT health_gateway/doc/_mapping
{
  "doc" : {
    "properties" : {
      "Data.CPU" : { "type" : "integer" }
    }
  }
}
But it gave me the below error:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "illegal_argument_exception",
        "reason" : "Types cannot be provided in put mapping requests, unless the include_type_name parameter is set to true."
      }
    ],
    "type" : "illegal_argument_exception",
    "reason" : "Types cannot be provided in put mapping requests, unless the include_type_name parameter is set to true."
  },
  "status" : 400
}
There is also this document which says we can convert the data type using mutate, but I am not able to understand it properly.
I do not want to delete the index and recreate it, as I have created a visualization based on this index and it would also get deleted along with the index. Can anyone please help with this?
The short answer is that you can't change the mapping of a field that already exists in a given index, as explained in the official docs.
The specific error you got is because you included /doc/ in your request path (you probably wanted /<index>/_mapping), but fixing this alone won't be sufficient.
Finally, note that dots in field names have been allowed again since Elasticsearch 5.x and are interpreted as object paths, so a field named Data.CPU is treated the same as a CPU field nested inside a Data object.
Nevertheless, there are several ways forward in your situation... here are a couple of them:
Use a scripted field
You can add a scripted field to the Kibana index-pattern. It's quick to implement, but has major performance implications. You can read more about them on the Elastic blog here (especially under the heading "Match a number and return that match").
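As a rough sketch (simpler than the regex approach in that blog post), the scripted field would be a Painless script of type number added on the index pattern; it assumes Data.CPU was dynamically mapped as text with a .keyword sub-field, which is the usual default:

doc['Data.CPU.keyword'].size() == 0 ? 0 : Integer.parseInt(doc['Data.CPU.keyword'].value)

The 0 is only a placeholder for documents where the field is missing; values that are not parseable as integers would still cause script errors at query time.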
Add a new multi-field
You could add a new multi-field. The example below assumes that CPU is a nested field under Data, rather than a field literally named Data.CPU with a dot in it:
PUT health_gateway/_mapping
{
  "properties": {
    "Data": {
      "properties": {
        "CPU": {
          "type": "keyword",
          "fields": {
            "int": {
              "type": "short"
            }
          }
        }
      }
    }
  }
}
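One caveat worth adding: a newly added multi-field is only indexed for documents written after the mapping change. Existing documents can be picked up in place with an update-by-query that has no script, for example:

POST health_gateway/_update_by_query?conflicts=proceed

This simply reindexes each document onto itself so the new sub-field becomes searchable and aggregatable.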
Reindex your data within ES
Use the Reindex API. Be sure to set the correct mapping on the target index.
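A sketch of that approach, assuming a hypothetical new index called health_gateway_v2 and that an integer mapping is what you want for Data.CPU:

PUT health_gateway_v2
{
  "mappings": {
    "properties": {
      "Data": {
        "properties": {
          "CPU": { "type": "integer" }
        }
      }
    }
  }
}

POST _reindex
{
  "source": { "index": "health_gateway" },
  "dest": { "index": "health_gateway_v2" }
}

Numeric fields coerce string values like "42" by default during reindexing; once it completes, point your Kibana index pattern (or an alias) at the new index.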
Delete and reindex everything from source
If you are able to regenerate the data from source in a timely manner, without disrupting users, you can simply delete the index and reingest all your data with an updated mapping.
You can update the mapping by indexing the same field in multiple ways, i.e. by using multi-fields.
Using the below mapping, Data.CPU.raw will be of integer type:
{
  "mappings": {
    "properties": {
      "Data": {
        "properties": {
          "CPU": {
            "type": "text",
            "fields": {
              "raw": {
                "type": "integer"
              }
            }
          }
        }
      }
    }
  }
}
Or you can create a new index with the correct mapping and reindex the data into it using the Reindex API.

Not able to configure Elasticsearch snapshot repository using OCI Amazon S3 Compatibility API

My Elasticsearch 7.8.0 cluster is running in OCI OKE (Kubernetes running in Oracle Cloud). I want to set up Elasticsearch snapshot backups to the OCI Object Store using the OCI Amazon S3 Compatibility API. I added the repository-s3 plugin and configured ACCESS_KEY and SECRET_KEY in the pods. While creating the repository, I am getting an "s_s_l_peer_unverified_exception".
PUT /_snapshot/s3-repository
{
  "type": "s3",
  "settings": {
    "client": "default",
    "region": "OCI_REGION",
    "endpoint": "OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com",
    "bucket": "es-backup"
  }
}
Response:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "repository_verification_exception",
        "reason" : "[s3-repository] path is not accessible on master node"
      }
    ],
    "type" : "repository_verification_exception",
    "reason" : "[s3-repository] path is not accessible on master node",
    "caused_by" : {
      "type" : "i_o_exception",
      "reason" : "Unable to upload object [tests-0J3NChNRT9WIQJknHAssKg/master.dat] using a single upload",
      "caused_by" : {
        "type" : "sdk_client_exception",
        "reason" : "Unable to execute HTTP request: Certificate for <es-backup.OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com> doesn't match any of the subject alternative names: [swiftobjectstorage.us-ashburn-1.oraclecloud.com]",
        "caused_by" : {
          "type" : "s_s_l_peer_unverified_exception",
          "reason" : "Certificate for <es-backup.OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com> doesn't match any of the subject alternative names: [swiftobjectstorage.us-ashburn-1.oraclecloud.com]"
        }
      }
    }
  },
  "status" : 500
}
First, make sure you are constructing the S3 Compatibility API endpoint correctly:
"endpoint": "OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com"
Here OCI_TENANCY must be the tenancy namespace (TENANCY_NAMESPACE), not the tenancy name. Please refer to this link for more information.
You can find your tenancy namespace on the Administration -> Tenancy Details page.
Well, you shouldn't be talking to es-backup.OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com, where your bucket name is part of the domain. If you try that hostname in your browser you'll get a similar security warning about the certificate.
If you look at https://docs.cloud.oracle.com/en-us/iaas/Content/Object/Tasks/s3compatibleapi.htm#usingAPI you'll see a mention of:
The application must use path-based access. Virtual host-style access (accessing a bucket as bucketname.namespace.compat.objectstorage.region.oraclecloud.com) is not supported.
AWS is migrating from path based to sub-domain based URLs for S3 (https://aws.amazon.com/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/) so the ES S3 plugin is probably defaulting to doing things the new AWS way.
Does it make a difference if you use an https:// URL for the endpoint value? Looking at my 6.8 config I have something like:
{
  "s3-repository": {
    "type": "s3",
    "settings": {
      "bucket": "es-backup",
      "client": "default",
      "endpoint": "https://{namespace}.compat.objectstorage.us-ashburn-1.oraclecloud.com/",
      "region": "us-ashburn-1"
    }
  }
}
What I'm guessing is that having a full URL for the endpoint probably sets the protocol and path_style_access, or that 6.8 didn't require you to set path_style_access to true but 7.8 might. Either way, try a full URL or set path_style_access to true. Relevant docs: https://www.elastic.co/guide/en/elasticsearch/plugins/master/repository-s3-client.html
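If you do go down the path-style route on 7.x, my understanding is that it is configured as a client setting in elasticsearch.yml rather than on the repository itself; a sketch with the namespace and region as placeholders:

# elasticsearch.yml - client settings for the repository-s3 plugin
s3.client.default.endpoint: "TENANCY_NAMESPACE.compat.objectstorage.OCI_REGION.oraclecloud.com"
s3.client.default.protocol: "https"
s3.client.default.path_style_access: true

The repository can then be re-created with just type, bucket, and client, since the endpoint now comes from the client configuration.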

Reindex ElasticSearch index returns "Incorrect HTTP method for uri [/_reindex] and method [GET], allowed: [POST]"

I'm trying to upgrade an elasticsearch cluster from 1.x to 6.x. I'm reindexing the remote 1.x indices into the 6.x cluster. According to the docs, this is possible:
To upgrade an Elasticsearch 1.x cluster, you have two options:
Perform a full cluster restart upgrade to Elasticsearch 2.4.x and reindex or delete the 1.x indices. Then, perform a full cluster restart upgrade to 5.6 and reindex or delete the 2.x indices. Finally, perform a rolling upgrade to 6.x. For more information about upgrading from 1.x to 2.4, see Upgrading Elasticsearch in the Elasticsearch 2.4 Reference. For more information about upgrading from 2.4 to 5.6, see Upgrading Elasticsearch in the Elasticsearch 5.6 Reference.
Create a new 6.x cluster and reindex from remote to import indices directly from the 1.x cluster.
I'm doing this locally for test purposes, and using the following command with 6.x running:
curl --request POST localhost:9200/_reindex -d #reindex.json
My reindex.json file looks like this:
{
  "source": {
    "remote": {
      "host": "http://localhost:9200"
    },
    "index": "some_index_name",
    "query": {
      "match": {
        "test": "data"
      }
    }
  },
  "dest": {
    "index": "some_index_name"
  }
}
However, this returns the following error:
Incorrect HTTP method for uri [/_reindex] and method [GET], allowed: [POST]"
Why is it telling me I can't use GET and to use POST instead? I'm clearly specifying a POST request here, but it seems to think it's a GET request. Any idea why it's getting the wrong request type?
I was facing the same issue, but it worked after adding the settings in the PUT request.
PUT /my_blog
{
  "settings" : {
    "number_of_shards" : 1
  },
  "mappings": {
    "post": {
      "properties": {
        "user_id": {
          "type": "integer"
        },
        "post_text": {
          "type": "text"
        },
        "post_date": {
          "type": "date"
        }
      }
    }
  }
}
You can also refer this - https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html
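Aside from that, it is worth double-checking the curl invocation itself. A sketch of what was presumably intended: curl reads a request body from a file with @ (not #), and Elasticsearch 6.x also requires an explicit Content-Type header on requests with a body:

curl -X POST "localhost:9200/_reindex" -H 'Content-Type: application/json' -d @reindex.json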

geo point kibana elasticsearch not showing up on tilemap

I'm trying to view my geojson on any sort of map in Kibana.
My original data is a geo polygon with an array of coordinates.
From my understanding, ElasticSearch/Kibana can't visualize geo shapes, so I'm trying to make a coordinate a geopoint so that I can view it on a tilemap.
Is this possible? I've tried to create a couple of different mappings. My most current one won't let me index data to it. Are there any better approaches? (In an ideal world I could plot a polygon... though I don't think Kibana supports this.) I am using version 5.3.
Original Data (replacing actual values with long and lat):
{
  "geometry": {
    "type": "Polygon",
    "coordinates": [
      [
        [long,lat],
        [long,lat],
        [long,lat],
        [long,lat],
        [long,lat]
      ]
    ]
  },
This is the mapping elasticsearch defaults to when I index my json:
{
  "indexname" : {
    "mappings" : {
      "my_type" : {
        "properties" : {
          "geometry" : {
            "properties" : {
              "coordinates" : {
                "type" : "float"
              },
              "type" : {
                "type" : "text",
                "fields" : {
                  "keyword" : {
                    "type" : "keyword",
                    "ignore_above" : 256
                  }
                }
              }
            }
          },
This is the attempt I just tried to fix the mapping (however, with this approach I cannot index any data):
curl -XPUT "http://localhost:9200/indexname" -d "{\"mappings\" : {\"my_type\" : {\"properties\" : {\"geometry\" : {\"type\":\"geo_point\"}}}}}"
If I try this command it shows up in kibana, but when I try to run a tilemap, the map just disappears, so I'm assuming it is not getting the correct data:
curl -XPUT "http://localhost:9200/indexname" -d "{\"mappings\" : {\"my_type\" : {\"properties\" : {\"coordinates\" : {\"type\":\"geo_point\"}}}}}"
EDIT
No success. I tried:
kibana-plugin install file:///kibana-5.3.0-windows-x86/kibana-5.3.0-windows-x86/plugins/enhanced-tilemap-v2017-03-17-5.2.2/kibana/enhanced_tilemap
Attempting to transfer from file:///kibana-5.3.0-windows-x86/kibana-5.3.0-windows-x86/plugins/enhanced-tilemap-v2017-03-17-5.2.2/kibana/enhanced_tilemap
Transferring unknown number of bytes
Error: EISDIR: illegal operation on a directory, read
Plugin installation was unsuccessful due to error "EISDIR: illegal operation on a directory, read"
kibana-plugin install http://artifacts.elastic.co/downloads/kibana-plugins/enhanced_tilemap/enhanced_tilemap-5.2.2.zip
Attempting to transfer from http://artifacts.elastic.co/downloads/kibana-plugins/enhanced_tilemap/enhanced_tilemap-5.2.2.zip
Attempting to transfer from https://artifacts.elastic.co/downloads/kibana-plugins/http://artifacts.elastic.co/downloads/kibana-plugins/enhanced_tilemap/enhanced_tilemap-5.2.2.zip/http://artifacts.elastic.co/downloads/kibana-plugins/enhanced_tilemap/enhanced_tilemap-5.2.2.zip-5.3.0.zip
Plugin installation was unsuccessful due to error "No valid url specified."
The enhanced_tilemap Kibana plugin allows you to visualize polygons.
If you are using Kibana 5.3, the plugin installer will not accept a plugin built for 5.2, so you can do one of two things:
1) open an issue in the plugin's GitHub repository
2) clone the plugin (or extract the zip file) into the plugins directory in the Kibana home, cd into the extracted plugin directory, run bower install, and restart Kibana
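Separately, on the mapping side, here is a minimal geo_point sketch that a 5.x tile map can aggregate on. It assumes a fresh index and that you index one representative point per document (for example the polygon's first vertex or its centroid); the location field name is only illustrative:

PUT indexname
{
  "mappings": {
    "my_type": {
      "properties": {
        "location": { "type": "geo_point" }
      }
    }
  }
}

PUT indexname/my_type/1
{
  "location": { "lat": 41.12, "lon": -71.34 }
}

In the tile map visualization, you would then choose the Geohash bucket aggregation on the location field.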
