How to migrate elasticsearch indices to data streams - elasticsearch

I was asked to migrate to data streams in Elasticsearch. I am a newbie in Elasticsearch and still learning about it. The only useful article I could find is: https://spinscale.de/posts/2021-07-07-elasticsearch-data-streams-explained.html#data-streams-in-kibana
Since we are using Elasticsearch under the Basic license, I got an error when following along with the tutorial and creating an ILM policy.
{
  "error" : {
    "root_cause" : [
      {
        "type" : "illegal_argument_exception",
        "reason" : "policy [csc-stream-policy] defines the [searchable_snapshot] action but the current license is non-compliant for [searchable-snapshots]"
      }
    ],
    "type" : "illegal_argument_exception",
    "reason" : "policy [csc-stream-policy] defines the [searchable_snapshot] action but the current license is non-compliant for [searchable-snapshots]"
  },
  "status" : 400
}
Can anyone give me an idea what else I could do to activate data streams in Elasticsearch? I can confirm that searchable snapshots are not supported under the free license. Is there another way around it?
Thanks in advance!
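Given the error above, one workaround under the Basic license would be to keep the ILM policy but drop the searchable_snapshot action, since data streams themselves do not depend on it. A minimal sketch of such a policy (the policy name is taken from the error message; the phase timings and rollover thresholds are assumptions to adjust to your retention needs):
PUT _ilm/policy/csc-stream-policy
{
  // NOTE: rollover and retention values below are assumptions, not taken from the original setup
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_age": "30d",
            "max_size": "50gb"
          }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
The data stream itself only needs a matching index template that declares a data_stream definition; none of that requires more than the Basic license.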

Related

illegal_argument_exception when creating an index alias - Elastic Cloud

I am trying to create an alias with a filter on the index pattern metrics-*. I was able to do it yesterday and the day before without any problems, but I can't do it again today, even if I re-run the same queries as yesterday. I have no problem creating an alias for logs-*. But when I try to create a metrics-* alias, I get an HTTP 400 code with this response:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "illegal_argument_exception",
        "reason" : "expressions [metrics-system.filesystem-default, metrics-system.cpu-default, metrics-endpoint.policy-default, metrics-endpoint.metrics-default, metrics-windows.perfmon-default, metrics-azure.compute_vm-default, metrics-system.process.summary-default, metrics-elastic_agent.endpoint_security-default, metrics-endpoint.metadata-default, metrics-endpoint.metadata_current_default, metrics-azure.storage_account-default, metrics-system.memory-default, metrics-system.uptime-default, metrics-elastic_agent.elastic_agent-default, metrics-windows.service-default, metrics-elastic_agent.metricbeat-default, metrics-system.fsstat-default, metrics-system.process-default, metrics-elastic_agent.filebeat-default, metrics-system.network-default, metrics-system.diskio-default, metrics-system.load-default, metrics-system.socket_summary-default] that match with both data streams and regular indices are disallowed"
      }
    ],
    "type" : "illegal_argument_exception",
    "reason" : "expressions [metrics-system.filesystem-default, metrics-system.cpu-default, metrics-endpoint.policy-default, metrics-endpoint.metrics-default, metrics-windows.perfmon-default, metrics-azure.compute_vm-default, metrics-system.process.summary-default, metrics-elastic_agent.endpoint_security-default, metrics-endpoint.metadata-default, metrics-endpoint.metadata_current_default, metrics-azure.storage_account-default, metrics-system.memory-default, metrics-system.uptime-default, metrics-elastic_agent.elastic_agent-default, metrics-windows.service-default, metrics-elastic_agent.metricbeat-default, metrics-system.fsstat-default, metrics-system.process-default, metrics-elastic_agent.filebeat-default, metrics-system.network-default, metrics-system.diskio-default, metrics-system.load-default, metrics-system.socket_summary-default] that match with both data streams and regular indices are disallowed"
  },
  "status" : 400
}
Here is the request body:
PUT metrics-*/_alias/perso-metrics
{
  "filter": {
    "term": {
      "agent.name": "minecraft-server"
    }
  }
}
Thanks in advance for your help
Looks like some of your indices whose names start with metrics are not data streams but regular indices. An alias request can't target both at the same time; if you create aliases separately for the data streams and for the regular indices, it will work.
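For instance, the filtered alias could be added to just the regular indices through the _aliases API, listing those indices explicitly instead of using the metrics-* pattern. This is only a sketch; the index name below is a placeholder for one of your actual non-data-stream indices:
POST _aliases
{
  "actions": [
    {
      "add": {
        // "index" is a placeholder; use one of your regular (non-data-stream) metrics indices
        "index": "metrics-regular-example",
        "alias": "perso-metrics",
        "filter": {
          "term": {
            "agent.name": "minecraft-server"
          }
        }
      }
    }
  ]
}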

Not able to configure Elasticsearch snapshot repository using OCI Amazon S3 Compatibility API

My Elasticsearch 7.8.0 cluster is running in OCI OKE (Kubernetes running in Oracle Cloud). I want to set up Elasticsearch snapshot backups to the OCI Object Storage using the OCI Amazon S3 Compatibility API. I added the repository-s3 plugin and configured ACCESS_KEY and SECRET_KEY in the pods. While creating the repository, I am getting "s_s_l_peer_unverified_exception":
PUT /_snapshot/s3-repository
{
  "type": "s3",
  "settings": {
    "client": "default",
    "region": "OCI_REGION",
    "endpoint": "OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com",
    "bucket": "es-backup"
  }
}
Response:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "repository_verification_exception",
        "reason" : "[s3-repository] path is not accessible on master node"
      }
    ],
    "type" : "repository_verification_exception",
    "reason" : "[s3-repository] path is not accessible on master node",
    "caused_by" : {
      "type" : "i_o_exception",
      "reason" : "Unable to upload object [tests-0J3NChNRT9WIQJknHAssKg/master.dat] using a single upload",
      "caused_by" : {
        "type" : "sdk_client_exception",
        "reason" : "Unable to execute HTTP request: Certificate for <es-backup.OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com> doesn't match any of the subject alternative names: [swiftobjectstorage.us-ashburn-1.oraclecloud.com]",
        "caused_by" : {
          "type" : "s_s_l_peer_unverified_exception",
          "reason" : "Certificate for <es-backup.OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com> doesn't match any of the subject alternative names: [swiftobjectstorage.us-ashburn-1.oraclecloud.com]"
        }
      }
    }
  },
  "status" : 500
}
I hope you are aware of when to use the S3 Compatibility API.
"endpoint": "OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com"
Please change OCI_TENANCY to TENANCY_NAMESPACE. Please refer to this link for more information.
You can find your tenancy namespace on the Administration -> Tenancy Details page.
Well, you shouldn't be talking to es-backup.OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com, where your bucket name is part of the domain. You can try it in your browser and you'll get a similar security warning about certs.
If you look at https://docs.cloud.oracle.com/en-us/iaas/Content/Object/Tasks/s3compatibleapi.htm#usingAPI you'll see a mention of:
The application must use path-based access. Virtual host-style access (accessing a bucket as bucketname.namespace.compat.objectstorage.region.oraclecloud.com) is not supported.
AWS is migrating from path-based to subdomain-based URLs for S3 (https://aws.amazon.com/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/), so the ES S3 plugin is probably defaulting to doing things the new AWS way.
Does it make a difference if you use an https:// URL for the endpoint value? Looking at my 6.8 config I have something like:
{
  "s3-repository": {
    "type": "s3",
    "settings": {
      "bucket": "es-backup",
      "client": "default",
      "endpoint": "https://{namespace}.compat.objectstorage.us-ashburn-1.oraclecloud.com/",
      "region": "us-ashburn-1"
    }
  }
}
What I'm guessing is that having a full URL for the endpoint sets the protocol and path_style_access, or that 6.8 didn't require setting path_style_access to true but 7.8 might. Either way, try a full URL or setting path_style_access to true. Relevant docs: https://www.elastic.co/guide/en/elasticsearch/plugins/master/repository-s3-client.html
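If you go the client-settings route, a minimal elasticsearch.yml sketch might look like the following. The "default" client name and the us-ashburn-1 region are assumptions carried over from the config above; double-check the setting names against the repository-s3 client docs linked above for your exact version.
# elasticsearch.yml - sketch only; the {namespace} placeholder and client name are assumptions
s3.client.default.endpoint: "https://{namespace}.compat.objectstorage.us-ashburn-1.oraclecloud.com"
s3.client.default.path_style_access: true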

Elastic Search scan operation not working

I'm performing some operations in Dataflow and putting documents into an Elasticsearch index. While trying to fetch docs from Kibana, I'm not able to fetch more than 10 records at a time, so I used the scan operation and also provided the size in the URL; now I'm getting a "scan operation not supported" error.
"root_cause" : [
{
"type" : "illegal_argument_exception",
"reason" : "No search type for [scan]"
}
],
"type" : "illegal_argument_exception",
"reason" : "No search type for [scan]"
},
So is there any way to get more than 10 docs from Kibana at the same time? I'm using Kibana 7.7.0. Thanks in advance.
search_type=scan was deprecated in Elasticsearch 2.1 and removed in later versions, so you're probably using something newer than ES 2.1.
https://www.elastic.co/guide/en/elasticsearch/reference/2.1/search-request-search-type.html
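On newer versions the replacement for scan is either a larger size on a regular search or the scroll API combined with sorting on _doc. A minimal sketch (the index name my-index and the page size of 1000 are assumptions):
// my-index and the size value below are placeholders
GET my-index/_search?scroll=1m
{
  "size": 1000,
  "sort": ["_doc"]
}
Each subsequent page is then fetched from GET _search/scroll using the scroll_id returned by the previous call.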

How to migrate index from Old Server to new server of elasticsearch

I have an index on an old Elasticsearch 6.2.0 server (Windows Server) and I am now trying to move it to a new server (Linux) running Elasticsearch 7.6.2. I tried the command below to migrate my index from the old to the new server, but it is throwing an exception.
POST _reindex
{
  "source": {
    "remote": {
      "host": "http://MyOldDNSName:9200"
    },
    "index": "test"
  },
  "dest": {
    "index": "test"
  }
}
The exception I am getting is:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "illegal_argument_exception",
        "reason" : "[MyOldDNSName:9200] not whitelisted in reindex.remote.whitelist"
      }
    ],
    "type" : "illegal_argument_exception",
    "reason" : "[MyOldDNSName:9200] not whitelisted in reindex.remote.whitelist"
  },
  "status" : 400
}
Note: I did not create any index in the new Elasticsearch server. Do I have to create it with my old schema and then try to execute the above command?
The error message is quite clear: the remote host (Windows, in your case) from which you are trying to reindex into your new host (Linux) is not whitelisted. Please refer to the Elasticsearch guide on how to reindex from remote for more info.
As per the same doc:
Remote hosts have to be explicitly whitelisted in elasticsearch.yml
using the reindex.remote.whitelist property. It can be set to a
comma delimited list of allowed remote host and port combinations
(e.g. otherhost:9200, another:9200, 127.0.10.*:9200, localhost:*).
Another useful discuss link to troubleshoot the issue.
https://www.elastic.co/guide/en/elasticsearch/reference/8.0/docs-reindex.html#reindex-from-remote
Add this to elasticsearch.yml, modifying it according to your environment:
reindex.remote.whitelist: "otherhost:9200, another:9200, 127.0.10.*:9200, localhost:*"
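With the host name taken from the error message above, the entry would presumably look like this (adjust the port if your old cluster listens elsewhere):
# host name taken from the error above; port 9200 assumed
reindex.remote.whitelist: "MyOldDNSName:9200"
Note that this is a static setting, so the node(s) on the new cluster need a restart after changing elasticsearch.yml.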

Elastic Cloud: how to configure the [search.max_buckets] cluster-level setting?

We were using Elasticsearch 6.x deployed on our own server.
We migrated to the cloud recently, so the version used is now 7.x.
We have a huge query with aggregations that was working on 6.x, but it is not working anymore.
This is due to a breaking change between versions.
https://www.elastic.co/guide/en/elasticsearch/reference/current/breaking-changes-7.0.html#breaking_70_aggregations_changes
search.max_buckets in the cluster setting
The dynamic cluster setting named search.max_buckets now defaults to 10,000 (instead of unlimited in the previous version). Requests that try to return more than the limit will fail with an exception.
So when we execute the query with aggregations, we get this exception:
"type" : "search_phase_execution_exception",
"reason" : "all shards failed",
"phase" : "query",
"grouped" : true,
"failed_shards" : [
{
"shard" : 0,
"index" : "xxxxxxx",
"node" : "xxxxxxxxxxxxxxxx",
"reason" : {
"type" : "too_many_buckets_exception",
"reason" : "Trying to create too many buckets. Must be less than or equal to: [10000] but was [10001]. This limit can be set by changing the [search.max_buckets] cluster level setting.",
"max_buckets" : 10000
}
}
We don't have time to change the query, so how can we configure this parameter on Elastic Cloud?
Or can I add a parameter to the query?
Thanks for your help.
I found the answer on the Elastic discussion forums:
https://discuss.elastic.co/t/increasing-max-buckets-for-specific-visualizations/187390
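For reference, the approach discussed there is presumably to raise the dynamic search.max_buckets limit through the cluster settings API, which also works on Elastic Cloud. A minimal sketch (the value 20000 is an arbitrary assumption; raising the limit increases memory usage for large aggregations):
PUT _cluster/settings
{
  "persistent": {
    "search.max_buckets": 20000
  }
}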
