Changing server.maxPayloadBytes on AWS Elasticsearch Service

So I have an ES cluster hosted on AWS where documents are mostly dynamic JSON without a fixed mapping.
When I try to do "Create index pattern" in Kibana,
it errors out with:
Error Payload content length greater than maximum allowed: 1048576
So I need to either increase server.maxPayloadBytes in kibana.yml (https://www.elastic.co/guide/en/kibana/current/settings.html)
or figure out another way.
To increase server.maxPayloadBytes in kibana.yml, I would have to edit the file directly, but I'm not sure how to do that.
I have a VPC endpoint for the cluster, but I couldn't SSH into it.
I am running
Elasticsearch version: 6.3

Talked to DevOps; apparently changing server.maxPayloadBytes on AWS Elasticsearch Service is not possible, since AWS (as of now) does not allow it.
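For reference, on a self-managed Kibana (where kibana.yml is editable) the change would look something like the sketch below; the domain endpoint is a placeholder and the path assumes a Linux package install. This does not apply to the Kibana bundled with AWS Elasticsearch Service.

# Sketch: raise the limit on a self-managed Kibana 6.x pointed at the AWS domain
# (adjust or replace any existing elasticsearch.url entry in the file)
cat >> /etc/kibana/kibana.yml <<'EOF'
elasticsearch.url: "https://your-domain.us-east-1.es.amazonaws.com"
server.maxPayloadBytes: 4194304
EOF
# 4194304 bytes = 4 MB, up from the 1048576 (1 MB) default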

Related

AWS Elasticsearch cluster upgrade from 6.3 to 7

Presently the AWS Elasticsearch cluster version is 6.3 and I am planning to upgrade it to 7. Reindexing also has to be done; it is required
so that the indices use _doc as the type instead of our custom mapping types.
Below are my queries:
1. What is the end-to-end process of upgrading an AWS ES cluster version?
2. What are the impacts post upgrade?
3. Is any specific backup required?
4. How to perform the upgrade in an AWS cluster?
5. Post upgrade, do I need to carry out any validation?
6. When to do reindexing? Post cluster upgrade?
What is the end-to-end process of upgrading an AWS ES cluster version?
You can perform an in-place upgrade of an AWS ES cluster from the AWS console. The upgrade triggers a blue/green deployment and takes quite a while. For example, we recently upgraded an ES 6.8 cluster with 4 nodes (10 TB each) to OpenSearch 1.3, and it took almost 12 hours to complete.
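The same upgrade can also be triggered from the AWS CLI; a minimal sketch, with a placeholder domain name and target version:

# Run the upgrade eligibility check first, then trigger the actual upgrade
aws es upgrade-elasticsearch-domain --domain-name my-domain --target-version 7.10 --perform-check-only
aws es upgrade-elasticsearch-domain --domain-name my-domain --target-version 7.10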
What are the impacts post upgrade?
By default, AWS migrates all the data and resources (mapping templates, alerts, lifecycle policies, etc.) into the new upgraded cluster.
If you have scripts that use the ES APIs, expect some API paths to change in the upgraded cluster. For example, the /_template path in ES 6.8 becomes _index_template in OpenSearch 1.3.
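A quick way to verify is to query both paths after the upgrade; a sketch with a placeholder endpoint:

curl -s "https://your-domain.us-east-1.es.amazonaws.com/_template?pretty"        # legacy path (ES 6.x)
curl -s "https://your-domain.us-east-1.es.amazonaws.com/_index_template?pretty"  # composable templates (OpenSearch 1.x)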
By default, AWS routes all traffic to the new cluster and does not change the ES endpoint. So if you have data ingestion pipelines that use the ES endpoint, they should keep working automatically. However, I would still recommend checking the logs of each of your data collectors for errors.
For example, if you are using Kinesis Firehose delivery streams, check the destination error logs from the AWS console. If you are using Logstash or Vector, check their logs too.
Is any specific backup required?
It's always a good idea to take periodic snapshots of your AWS ES domain. If something goes wrong, you can always spin up a new domain from a previous working snapshot.
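A sketch of taking a manual snapshot before the upgrade, assuming an S3 snapshot repository named my-s3-repo has already been registered with the domain (on AWS ES, registering the repository itself requires an IAM role and a signed request):

curl -s -XPUT "https://your-domain.us-east-1.es.amazonaws.com/_snapshot/my-s3-repo/pre-upgrade?wait_for_completion=true"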
How to perform the upgrade in an AWS cluster?
Not sure what you mean by this. There's no way to manually access the underlying nodes/machines and perform the upgrade yourself, because the ES cluster is fully managed by AWS.
Post upgrade, do I need to carry out any validation?
As mentioned in the answer to question 2, it's definitely a good idea to check your ingestion pipelines. Check for any warnings/errors in the logs. You can also use Kibana/OpenSearch Dashboards to visually inspect your data for anything weird.
When to do reindexing? Post cluster upgrade?
After you perform the in-place upgrade from the AWS console, your existing indices and data are all copied to the newly upgraded cluster.
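If you still need to reindex custom-typed indices into _doc before moving to 7.x, a minimal sketch using the _reindex API; the endpoint and index names are placeholders:

# Reindex a custom-typed 6.x index into a new index with the _doc type
curl -s -XPOST "https://your-domain.us-east-1.es.amazonaws.com/_reindex" -H 'Content-Type: application/json' -d'
{
  "source": { "index": "my-index-v6" },
  "dest":   { "index": "my-index-v7", "type": "_doc" }
}'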

How can I generate an enrollment token for Elasticsearch to connect with Kibana?

I have Elasticsearch running on my Kubernetes cluster with host http://192.168.18.35:31200/. Now I have to connect my Elasticsearch to Kibana. For that, an enrollment token needs to be generated, but how?
When I log in to the root directory of Elasticsearch from the Kibana dashboard and type the following command to generate a new enrollment token, it shows this error:
command: bin/elasticsearch-create-enrollment-token --scope kibana
error: bash: bin/elasticsearch-create-enrollment-token: No such file or directory
I have created a file elasticsearch-create-enrollment-token inside the bin directory and gave it full permissions. Still, no token is generated.
Any ideas on the enrollment token, guys?
Assuming that you are on Debian/Ubuntu, this should help:
cd /usr/share/elasticsearch/bin/
then
./elasticsearch-create-enrollment-token --scope kibana
Since you're running ES 7.9, you also need Kibana 7.9. You cannot run Kibana 8 on ES 7.9.
That's the reason you don't have the elasticsearch-create-enrollment-token script in your bin folder: it's new in ES 8.
The enrollment flow for configuration is available in version 8.0 and onwards only and is designed to work only with the TLS configuration that is generated automatically on the first start of the node.
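For completeness, on an 8.x node the flow would look something like this sketch; the paths assume a Linux package install, and <token> stands for the value printed by the first command:

# On the Elasticsearch node: print a Kibana enrollment token
/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token --scope kibana
# On the Kibana host: enroll Kibana with the printed token
/usr/share/kibana/bin/kibana-setup --enrollment-token <token>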
You can still use the documentation to set up TLS manually and configure Kibana to connect to your Elasticsearch cluster as you would in previous versions; this is always supported too.
I'd strongly suggest that you look into using ECK and take advantage of the documentation available.

Running two different ES versions on the same machine and configuring Kibana accordingly

I have installed two ES instances on my machine. One is version 5 (localhost:9200) and the other is version 6 (localhost:9500). Version 5 is used to index and store data alone, while version 6 is used to do some analytics using Kibana dashboards. When I start Kibana, it automatically stops, stating that all the ES instances should be on the same version. Is there any way I can stop Kibana from reading localhost:9200?
Like @Abhijit Bashetti stated in the comment, you need to modify the kibana.yml file in order to point Kibana to the Elasticsearch instance you wish.
You should change "localhost:9200" to "localhost:9500" in order for Kibana to reach ES v6.
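A minimal sketch of the relevant kibana.yml line, assuming Kibana 6.x (where the setting is named elasticsearch.url):

# kibana.yml: point Kibana at the v6 node instead of the v5 one
elasticsearch.url: "http://localhost:9500"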

How to set Elasticsearch 6.x password without using X-Pack

We are using Elasticsearch in a Kubernetes cluster (not exposed publicly) without X-Pack security, and had it working in 5.x with elastic/changeme, but after trying to get it set up with 6.x, it's now requiring a password, and the default elastic/changeme no longer works.
We didn't explicitly configure it to require authentication, since it's not publicly exposed and only accessible internally, so we're not sure why it's requiring a password or, more importantly, how we can find out what it is, or how to set/change it without using X-Pack security.
Will we end up needing to subscribe to X-Pack since we're trying to use it within a Kubernetes cluster?
Not sure how you are deploying Elasticsearch in Kubernetes, but we had a similar issue and ended up passing this:
xpack.security.enabled=false
through the environment to the container.
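For example, a sketch of the equivalent docker run invocation; the same environment variable carries over to the env section of a Kubernetes pod spec:

# Run a single-node ES 6.x container with X-Pack security disabled (sketch)
docker run -d -p 9200:9200 \
  -e "discovery.type=single-node" \
  -e "xpack.security.enabled=false" \
  docker.elastic.co/elasticsearch/elasticsearch:6.4.2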
If you don't use X-Pack at all, you should use the oss flavor of Elasticsearch. It includes only the open source components of Elasticsearch:
docker pull docker.elastic.co/elasticsearch/elasticsearch-oss:6.4.2
The interesting thing is, Elastic has removed any mention of it from the documentation since 6.3.
See:
Docker 6.2
Docker current

How to reset replication stream between couchbase and elasticsearch

I have a Couchbase cluster set up as the primary source for data. From this, a subset of data is synced to an Elasticsearch cluster via the Couchbase Transport Plugin for Elasticsearch (https://github.com/couchbaselabs/elasticsearch-transport-couchbase), which sets up an XDCR stream from Couchbase to Elasticsearch.
Due to some issues with the Elasticsearch cluster, all data needs to be synced again from Couchbase to Elasticsearch. I have tried recreating the XDCR stream, but that does not seem to help, as it only copies a very small subset of documents. Is there a way this can be achieved?
Additional details
Couchbase version: 3.1.0
Number of Couchbase documents: 50K+
Documents synced to Elasticsearch: around 700 (expected 20K+)
If a document in Couchbase is modified, it is successfully synced to Elasticsearch
The issue you're experiencing is likely in one of the following: XDCR, the Couchbase Transport Plugin for Elasticsearch, or Elasticsearch itself.
Start by checking for XDCR errors. You can find your XDCR logs using these instructions. Be aware that the Transport Plugin uses XDCR v1, while almost everything else in Couchbase uses v2.
Consult the advice in troubleshooting the Couchbase Transport Plugin for Elasticsearch. The instructions should work for you even though they are from the 4.0 docs.
Pay attention to how your documents are being mapped to Elasticsearch. You mention that you're expecting only a subset of documents to be synced to Elasticsearch, so it's possible that you have lost a setting or misconfigured something. You can enable logging and observe a small set of test data. At TRACE level, you should be able to see each document that is inspected.
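A sketch of raising the log level dynamically via the cluster settings API; the logger name for the plugin is an assumption here, so check the plugin docs for the exact name:

# Turn on TRACE logging for the transport plugin (logger name is assumed)
curl -s -XPUT "http://localhost:9200/_cluster/settings" -d'
{ "transient": { "logger.transport.couchbase": "TRACE" } }'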
If all of that fails, make sure the basics are working by indexing the beer sample dataset, following the directions in the Couchbase docs. ES is probably not the issue, but testing with a fresh ES instance will rule out problems on that side.
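Once the sample bucket has replicated, a quick count check on the ES side; the index name assumes the default bucket-name-to-index-name mapping:

curl -s "http://localhost:9200/beer-sample/_count?pretty"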
