Not able to configure Elasticsearch snapshot repository using OCI Amazon S3 Compatibility API

My Elasticsearch 7.8.0 cluster is running in OCI OKE (Kubernetes running in Oracle Cloud). I want to set up Elasticsearch snapshot backups to the OCI Object Store using the OCI Amazon S3 Compatibility API. I added the repository-s3 plugin and configured ACCESS_KEY and SECRET_KEY in the pods. While registering the repository, I am getting "s_s_l_peer_unverified_exception":
PUT /_snapshot/s3-repository
{
  "type": "s3",
  "settings": {
    "client": "default",
    "region": "OCI_REGION",
    "endpoint": "OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com",
    "bucket": "es-backup"
  }
}
Response:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "repository_verification_exception",
        "reason" : "[s3-repository] path is not accessible on master node"
      }
    ],
    "type" : "repository_verification_exception",
    "reason" : "[s3-repository] path is not accessible on master node",
    "caused_by" : {
      "type" : "i_o_exception",
      "reason" : "Unable to upload object [tests-0J3NChNRT9WIQJknHAssKg/master.dat] using a single upload",
      "caused_by" : {
        "type" : "sdk_client_exception",
        "reason" : "Unable to execute HTTP request: Certificate for <es-backup.OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com> doesn't match any of the subject alternative names: [swiftobjectstorage.us-ashburn-1.oraclecloud.com]",
        "caused_by" : {
          "type" : "s_s_l_peer_unverified_exception",
          "reason" : "Certificate for <es-backup.OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com> doesn't match any of the subject alternative names: [swiftobjectstorage.us-ashburn-1.oraclecloud.com]"
        }
      }
    }
  },
  "status" : 500
}

First, make sure you are using the S3 Compatibility API in the right situation. Look at your endpoint:
"endpoint": "OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com"
Change OCI_TENANCY to your TENANCY_NAMESPACE; refer to the OCI Object Storage documentation for more information.
You can find your tenancy namespace on the Administration -> Tenancy Details page.
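For example, the repository registration from the question with only the namespace corrected would look like this (a sketch; TENANCY_NAMESPACE and OCI_REGION are placeholders for your actual values):
PUT /_snapshot/s3-repository
{
  "type": "s3",
  "settings": {
    "client": "default",
    "region": "OCI_REGION",
    "endpoint": "TENANCY_NAMESPACE.compat.objectstorage.OCI_REGION.oraclecloud.com",
    "bucket": "es-backup"
  }
}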

Well, you shouldn't be talking to es-backup.OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com, where your bucket name is part of the domain. You can try it in your browser and you'll get a similar security warning about certs.
If you look at https://docs.cloud.oracle.com/en-us/iaas/Content/Object/Tasks/s3compatibleapi.htm#usingAPI you'll see a mention of:
The application must use path-based access. Virtual host-style access (accessing a bucket as bucketname.namespace.compat.objectstorage.region.oraclecloud.com) is not supported.
AWS is migrating from path based to sub-domain based URLs for S3 (https://aws.amazon.com/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/) so the ES S3 plugin is probably defaulting to doing things the new AWS way.
Does it make a difference if you use an https:// URL for the endpoint value? Looking at my 6.8 config I have something like:
{
  "s3-repository": {
    "type": "s3",
    "settings": {
      "bucket": "es-backup",
      "client": "default",
      "endpoint": "https://{namespace}.compat.objectstorage.us-ashburn-1.oraclecloud.com/",
      "region": "us-ashburn-1"
    }
  }
}
What I'm guessing is that having a full URL for the endpoint sets the protocol and path_style_access implicitly, or that 6.8 didn't require you to set path_style_access to true while 7.8 might. Either way, try a full URL for the endpoint, or set path_style_access to true. Relevant docs: https://www.elastic.co/guide/en/elasticsearch/plugins/master/repository-s3-client.html
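On 7.x these options are primarily client settings rather than repository settings, so one way to apply both suggestions is in elasticsearch.yml (a minimal sketch, assuming the default client; TENANCY_NAMESPACE and OCI_REGION are placeholders):
# elasticsearch.yml -- S3-compatible client settings for OCI Object Storage
s3.client.default.protocol: https
s3.client.default.endpoint: "TENANCY_NAMESPACE.compat.objectstorage.OCI_REGION.oraclecloud.com"
# Force path-based access, since OCI does not support virtual host-style bucket URLs
s3.client.default.path_style_access: true
With these set, the PUT /_snapshot/s3-repository body should only need the client and bucket settings.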

Related

How to migrate elasticsearch indices to data streams

I was asked to migrate to data streams in Elasticsearch. I am a newbie in Elasticsearch and still learning about it. The only useful article I could find: https://spinscale.de/posts/2021-07-07-elasticsearch-data-streams-explained.html#data-streams-in-kibana
Since we are using Elasticsearch under the Basic license, I got an error when I was following along with the tutorial and creating an ILM policy.
{
  "error" : {
    "root_cause" : [
      {
        "type" : "illegal_argument_exception",
        "reason" : "policy [csc-stream-policy] defines the [searchable_snapshot] action but the current license is non-compliant for [searchable-snapshots]"
      }
    ],
    "type" : "illegal_argument_exception",
    "reason" : "policy [csc-stream-policy] defines the [searchable_snapshot] action but the current license is non-compliant for [searchable-snapshots]"
  },
  "status" : 400
}
Can anyone give me an idea what else I could do to activate data streams in Elasticsearch? I can confirm that searchable snapshots are not supported under the free license. Is there another way around it?
Thanks in advance!
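Note that data streams themselves don't require searchable snapshots; only the searchable_snapshot ILM action is license-gated. A minimal sketch of an ILM policy without that action, which a Basic license should accept (the phase names and ages here are just example values):
PUT _ilm/policy/csc-stream-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "30d", "max_size": "50gb" }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": { "delete": {} }
      }
    }
  }
}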

How to migrate index from Old Server to new server of elasticsearch

I have one index on an old Elasticsearch server, version 6.2.0 (Windows Server), and now I am trying to move it to a new server (Linux) running Elasticsearch 7.6.2. I tried the command below to migrate my index from the old server to the new one, but it throws an exception.
POST _reindex
{
  "source": {
    "remote": {
      "host": "http://MyOldDNSName:9200"
    },
    "index": "test"
  },
  "dest": {
    "index": "test"
  }
}
The exception I am getting is:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "illegal_argument_exception",
        "reason" : "[MyOldDNSName:9200] not whitelisted in reindex.remote.whitelist"
      }
    ],
    "type" : "illegal_argument_exception",
    "reason" : "[MyOldDNSName:9200] not whitelisted in reindex.remote.whitelist"
  },
  "status" : 400
}
Note: I did not create any index on the new Elasticsearch server. Do I have to create it with my old schema and then try to execute the above command?
The error message is quite clear: the remote host (Windows in your case), from which you are trying to rebuild the index on your new host (Linux), is not whitelisted. Please refer to the Elasticsearch guide on how to reindex from remote for more info.
As per the same doc:
Remote hosts have to be explicitly whitelisted in elasticsearch.yml
using the reindex.remote.whitelist property. It can be set to a
comma delimited list of allowed remote host and port combinations
(e.g. otherhost:9200, another:9200, 127.0.10.*:9200, localhost:*).
Another useful link for troubleshooting the issue:
https://www.elastic.co/guide/en/elasticsearch/reference/8.0/docs-reindex.html#reindex-from-remote
Add this to elasticsearch.yml, modifying it according to your environment (it is a static setting, so the node must be restarted for it to take effect):
reindex.remote.whitelist: "otherhost:9200, another:9200, 127.0.10.*:9200, localhost:*"
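As for the note about the schema: _reindex does not copy settings or mappings from the source index, so it is usually best to create the destination index with the desired mapping first; otherwise dynamic mapping will guess the field types. A sketch (the field name and type are hypothetical placeholders for your actual schema):
PUT test
{
  "mappings": {
    "properties": {
      "some_field": { "type": "keyword" }
    }
  }
}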

How to create elasticsearch watcher by xpack

I just started working with Elasticsearch and am now trying to create my first watcher.
There is some information I have read in the Elasticsearch documentation: https://www.elastic.co/guide/en/x-pack/current/watcher-getting-started.html
And now I try to create one:
https://es.origin-test.cloud.rccf.ru/apiconnect508/_xpack/watcher/watch/audit_watch
PUT method + auth headers
I put in:
{ "trigger" : {
"schedule": {
"interval": "1h"
}
}, "actions" : { "send_email" : {
"email" : {
"to" : "ext_avolkova#rencredit.ru",
"subject" : "Watcher Notification",
"body" : "{{ctx.payload.hits.total}} logs found"
} } } }
But now I see this mistake:
No handler found for uri
[/apiconnect508/_xpack/watcher/watch/log_audit] and method [PUT]
Please help me create one simple watcher.
Based on the support matrix, Elasticsearch 2.x is not compatible with X-Pack.
You might want to install Watcher as a separate plugin; see the Watcher plugin installation documentation.
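If you do install the standalone Watcher plugin on 2.x, the endpoint drops the _xpack prefix. A sketch of the watch against that older API (assuming the 2.x Watcher plugin; the logs-* index pattern is a placeholder, and the email action also needs an email account configured in elasticsearch.yml). Note the added search input, without which {{ctx.payload.hits.total}} has nothing to render:
PUT /_watcher/watch/audit_watch
{
  "trigger": { "schedule": { "interval": "1h" } },
  "input": {
    "search": {
      "request": {
        "indices": [ "logs-*" ],
        "body": { "query": { "match_all": {} } }
      }
    }
  },
  "actions": {
    "send_email": {
      "email": {
        "to": "ext_avolkova@rencredit.ru",
        "subject": "Watcher Notification",
        "body": "{{ctx.payload.hits.total}} logs found"
      }
    }
  }
}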

Can't connect to my proxied elasticsearch node

I'm having issues with connecting from my Go client to my es node.
I have elasticsearch behind an nginx proxy that sets basic auth.
All settings are default in ES besides memory.
Via browser it works wonderfully, but not via this client:
https://github.com/olivere/elastic
I read the docs and they say it uses the /_nodes/http API to connect. Now this is probably where I did something wrong, because the response from that API looks like this:
{
  "_nodes" : {
    "total" : 1,
    "successful" : 1,
    "failed" : 0
  },
  "cluster_name" : "elasticsearch",
  "nodes" : {
    "u6TqFjAvRBa3_4FndfKh4w" : {
      "name" : "u6TqFjA",
      "transport_address" : "127.0.0.1:9300",
      "host" : "127.0.0.1",
      "ip" : "127.0.0.1",
      "version" : "5.6.2",
      "build_hash" : "57e20f3",
      "roles" : [
        "master",
        "data",
        "ingest"
      ],
      "http" : {
        "bound_address" : [
          "[::1]:9200",
          "127.0.0.1:9200"
        ],
        "publish_address" : "127.0.0.1:9200",
        "max_content_length_in_bytes" : 104857600
      }
    }
  }
}
I'm guessing I have to set the IPs to my actual IP/domain (my domain is like es01.somedomain.com).
So how do I correctly configure Elasticsearch so that my Go client can connect?
My config files for nginx look similar to this: https://www.elastic.co/blog/playing-http-tricks-nginx
Edit: I found a temporary solution by setting elastic.SetSniff(false) in the Options for the client, but I think that means I can't scale ES horizontally. So I'm still looking for an alternative.
You are looking for the HTTP options, specifically http.publish_host and http.publish_port, which should be set to the publicly reachable address and port of the Nginx server proxying the ES node.
Note that with Elasticsearch listening on 127.0.0.1:9300 for the transport, you won't be able to form a cluster with nodes on other hosts. The transport can be configured similarly with the transport options.
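A minimal elasticsearch.yml sketch, assuming Nginx terminates at es01.somedomain.com on port 443 and proxies to this node:
# elasticsearch.yml
# Address the node advertises to clients via /_nodes/http
http.publish_host: es01.somedomain.com
http.publish_port: 443
With that in place, the sniffer in olivere/elastic should discover es01.somedomain.com:443 instead of 127.0.0.1:9200, so the elastic.SetSniff(false) workaround should no longer be needed.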

geo point kibana elasticsearch not showing up on tilemap

I'm trying to view my geojson on any sort of map in Kibana.
My original data is a geo polygon with an array of coordinates.
From my understanding, Elasticsearch/Kibana can't visualize geo shapes, so I'm trying to turn a coordinate into a geo_point so that I can view it on a tilemap.
Is this possible? I've tried to create a couple of different mappings. My most current one won't let me index data to it. Any better approaches? (In an ideal world I could plot a polygon... though I don't think Kibana supports this.) I am using version 5.3.
Original Data (replacing actual values with long and lat):
{
  "geometry": {
    "type": "Polygon",
    "coordinates": [
      [
        [long, lat],
        [long, lat],
        [long, lat],
        [long, lat],
        [long, lat]
      ]
    ]
  },
This is the mapping elasticsearch defaults to when I index my json:
{
  "indexname" : {
    "mappings" : {
      "my_type" : {
        "properties" : {
          "geometry" : {
            "properties" : {
              "coordinates" : {
                "type" : "float"
              },
              "type" : {
                "type" : "text",
                "fields" : {
                  "keyword" : {
                    "type" : "keyword",
                    "ignore_above" : 256
                  }
                }
              }
            }
          },
This is the attempt I just made to fix the mapping (however, this approach won't accept any indexed data):
curl -XPUT "http://localhost:9200/indexname" -d "{\"mappings\" : {\"my_type\" : {\"properties\" : {\"geometry\" : {\"type\":\"geo_point\"}}}}}"
If I try this command it shows up in Kibana, but when I try to run a tilemap, the map just disappears, so I'm assuming it is not getting the correct data:
curl -XPUT "http://localhost:9200/indexname" -d "{\"mappings\" : {\"my_type\" : {\"properties\" : {\"coordinates\" : {\"type\":\"geo_point\"}}}}}"
EDIT
No success. I tried:
kibana-plugin install file:///kibana-5.3.0-windows-x86/kibana-5.3.0-windows-x86/plugins/enhanced-tilemap-v2017-03-17-5.2.2/kibana/enhanced_tilemap
Attempting to transfer from file:///kibana-5.3.0-windows-x86/kibana-5.3.0-windows-x86/plugins/enhanced-tilemap-v2017-03-17-5.2.2/kibana/enhanced_tilemap
Transferring unknown number of bytes
Error: EISDIR: illegal operation on a directory, read
Plugin installation was unsuccessful due to error "EISDIR: illegal operation on a directory, read"
kibana-plugin install http://artifacts.elastic.co/downloads/kibana-plugins/enhanced_tilemap/enhanced_tilemap-5.2.2.zip
Attempting to transfer from http://artifacts.elastic.co/downloads/kibana-plugins/enhanced_tilemap/enhanced_tilemap-5.2.2.zip
Attempting to transfer from https://artifacts.elastic.co/downloads/kibana-plugins/http://artifacts.elastic.co/downloads/kibana-plugins/enhanced_tilemap/enhanced_tilemap-5.2.2.zip/http://artifacts.elastic.co/downloads/kibana-plugins/enhanced_tilemap/enhanced_tilemap-5.2.2.zip-5.3.0.zip
Plugin installation was unsuccessful due to error "No valid url specified."
The enhanced-tilemap Kibana plugin allows you to visualize polygons.
If you are using Kibana 5.3, note that the plugin does not support installation on versions above 5.2.
You can do one of two options:
1) open an issue in the plugin's GitHub repository
2) clone the plugin, extract the zip file into the plugins directory in the Kibana home, cd into the plugin directory you downloaded, run bower install, and restart Kibana (a rough sketch below)
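A rough sketch of option 2 (the paths and file names are assumptions based on the question):
cd kibana-5.3.0-windows-x86/plugins
# extract the downloaded plugin zip into the plugins directory
unzip enhanced_tilemap-5.2.2.zip -d enhanced_tilemap
cd enhanced_tilemap
# install the plugin's front-end dependencies
bower install
# then restart Kibana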
