Kubernetes cAdvisor cannot find Elasticsearch node

cAdvisor v0.29.0
k8s v1.9
es v6.1.2
ELK in k8s works as expected. cAdvisor also works, but fails to find ES:
Added container args:
"-storage_driver=elasticsearch",
"-storage_driver_es_host='http://elasticsearch:9200'"
Error: Failed to initialize storage driver: failed to create the elasticsearch client - no Elasticsearch node available

I had this same issue in Swarm, and as far as I can tell it is not related to Kubernetes. The main issue is that cAdvisor v0.29 does not include a storage driver for Elasticsearch version 6. The version of cAdvisor you are using only includes a client for Elasticsearch version 2, specified (on line 27) in the source here. So the error message "Failed to initialize storage driver" means you cannot connect to that ES instance because cAdvisor does not have the proper client for that version of Elasticsearch.
There is an open GitHub issue for cAdvisor that would add a driver for Elasticsearch 5 (but not necessarily 6), but the change hasn't been merged into the master branch yet.
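For reference, if you were to point this cAdvisor build at an Elasticsearch 2.x instance, container args of roughly the following shape would be the ones to try. Two hedged notes: the nested single quotes in your -storage_driver_es_host value become a literal part of the URL and can themselves break the connection, and the Go Elasticsearch client that cAdvisor embeds reports "no Elasticsearch node available" when its node sniffing fails, which is common behind a Kubernetes Service (the sniffer flag below is taken from cAdvisor's storage options; verify it exists in your version):

```
"-storage_driver=elasticsearch",
"-storage_driver_es_host=http://elasticsearch:9200",
"-storage_driver_es_enable_sniffer=false"
```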

Related

AWS Elasticsearch cluster upgrade from 6.3 to 7

Presently the AWS Elasticsearch cluster version is 6.3 and I am planning to upgrade it to 7. Reindexing also has to be done; it is required so that the indices use _doc as the type instead of our custom mapping types.
Below are my queries:
1. What is the end-to-end process of upgrading the AWS ES cluster version?
2. What are the impacts post upgrade?
3. Is any specific backup required?
4. How do I perform the upgrade in an AWS cluster?
5. Post upgrade, do I need to carry out any validation?
6. When should I do the reindexing? Post cluster upgrade?
What is the end-to-end process of upgrading the AWS ES cluster version?
You can perform an in-place upgrade of an AWS ES cluster from the AWS console. The upgrade triggers a blue-green deployment and takes quite a while. For example, we recently upgraded an ES 6.8 cluster with 4 nodes (10 TB each) to OpenSearch 1.3, and it took almost 12 hours to complete.
What are the impacts post upgrade?
By default, AWS migrates all the data and resources (mapping templates, alerts, lifecycle policies, etc.) into the new upgraded cluster.
If you have scripts that use the ES APIs, expect some API paths to change in the upgraded cluster. For example, the /_template path in ES 6.8 becomes /_index_template in OpenSearch 1.3.
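As a sketch of that path change (the template name is a placeholder; on OpenSearch the legacy _template path is typically still accepted but deprecated, so scripts don't fail immediately):

```
# ES 6.8 (legacy template API)
GET /_template/my-template

# OpenSearch 1.3 (composable index template API)
GET /_index_template/my-template
```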
By default, AWS routes all traffic to the new cluster and does not change the ES endpoint. So if you have data ingestion pipelines that use the ES endpoint, they should keep working automatically. However, I would still recommend checking the logs of each of your data collectors for errors.
For example, if you are using Kinesis Data Firehose delivery streams, check the destination error logs in the AWS console. If you are using Logstash or Vector, check their logs too.
Is any specific backup required?
It's always a good idea to take periodic snapshots of your AWS ES domain. If something goes wrong, you can always spin up a new domain from a previous working snapshot.
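For manual snapshots on an AWS-managed domain, you first register an S3 snapshot repository and then snapshot into it. A sketch of the two calls (bucket, region, and role names are placeholders; on AWS the repository-registration request must also be SigV4-signed with an IAM role that can access the bucket):

```
# Register an S3 snapshot repository
PUT _snapshot/my-repo
{
  "type": "s3",
  "settings": {
    "bucket": "my-es-snapshots",
    "region": "us-east-1",
    "role_arn": "arn:aws:iam::123456789012:role/EsSnapshotRole"
  }
}

# Take a snapshot before upgrading
PUT _snapshot/my-repo/pre-upgrade
```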
How to perform upgrade in AWS cluster?
Not sure what you mean by this. There's actually no way to manually access the underlying nodes/machines and perform the upgrade yourself. This is because the ES cluster is fully managed by AWS.
Post upgrade, do I need to carry out any validation?
As mentioned in the answer to question 2, it's definitely a good idea to check your ingestion pipelines. Check the logs for any warnings/errors. You can also use Kibana/OpenSearch Dashboards to visually inspect your data for anything weird.
When to do reindexing? post cluster upgrade?
After you perform the in-place upgrade from AWS console, your existing indices and data are all copied to the newly upgraded cluster.
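In other words, you can run the reindexing after the upgrade completes. A minimal _reindex call looks like this (index names are placeholders; on ES 7+/OpenSearch, newly created indices use the single _doc type automatically, so reindexing into a fresh index is what drops your custom mapping types):

```
POST _reindex
{
  "source": { "index": "my-index-v1" },
  "dest":   { "index": "my-index-v2" }
}
```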

How to update Elasticsearch with ECK in Kubernetes?

I use ECK (Elastic Cloud on Kubernetes) with Azure Kubernetes Service.
ECK version 1.2.1
One Elasticsearch node (in a single pod) + one Kibana node.
I need to update Elasticsearch version from 7.9 to 7.10.
I updated the Elasticsearch version in the yaml file and ran the command:
kubectl apply -f elasticsearch.yaml
But it was not updated; the old Elasticsearch is still running in the same pod.
How to update Elasticsearch?
Will the data be lost?
The problem is solved.
I added one extra VM to the k8s cluster and the operator upgraded Elasticsearch.
It looks like there were not enough resources in the cluster to run the upgrade.
I also added one extra Elasticsearch pod. Perhaps the upgrade just doesn't work with a single Elasticsearch pod.
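When an ECK version bump doesn't roll the pod, the operator's status and logs usually say why (the resource and namespace names below are the ECK defaults; substitute your own cluster name):

```shell
# Check the cluster's phase and health while the operator applies the change
kubectl get elasticsearch
kubectl describe elasticsearch <cluster-name>

# The operator logs often state explicitly why a rolling upgrade is stuck,
# e.g. unschedulable pods due to insufficient resources
kubectl -n elastic-system logs statefulset/elastic-operator
```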

metricbeat agent running on ELK cluster?

Does Metricbeat always need an agent running separately from the ELK cluster, or does it provide a plugin/agent/approach to run Metricbeat on the cluster side?
If I understand your question, you want to know if there is a way to monitor your cluster without installing a Beat.
You can enable monitoring in the stack monitoring tab of Kibana.
If you want more, Beats are standalone shippers that can be plugged into Logstash or Elasticsearch.
Recent versions of the Elastic Stack (formerly known as ELK) offer more centralized configuration in Kibana, and version 7.9 introduced a unified Elastic Agent (in beta) that gathers several Beats into one and lets you manage your "fleet" of agents within Kibana.
But the information your Beats collect (CPU, RAM, logs, etc.) is not directly part of Elastic, so you'll still have to install a daemon on your system.
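For the Beat-free route mentioned above, the cluster's own metric collection can be switched on with a single dynamic cluster setting (this is the X-Pack self-monitoring setting; depending on your version and license you may also need monitoring enabled in elasticsearch.yml):

```
PUT _cluster/settings
{
  "persistent": {
    "xpack.monitoring.collection.enabled": true
  }
}
```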

How to load data from Cassandra into ELK

I have installed Cassandra 3.11.3 in my Ubuntu virtual machine. I have also installed the ELK stack (Elasticsearch, Logstash, Kibana).
How can I visualize the Cassandra data in Kibana using ELK? Please let me know the detailed configuration I will need in order to get data from the Cassandra database into the Kibana dashboard.
I did a similar thing using Kafka, with the structure below:
Cassandra -> Confluent Kafka -> Elasticsearch.
It's pretty easy to do, as the connectors are provided by Confluent.
But if you only need to visualize the data, you can try Banana, which gels well with Cassandra.
Note: Banana is a forked version of Kibana.
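If you go the Kafka route, the Elasticsearch end of the pipeline is typically Confluent's Elasticsearch sink connector, registered by POSTing a config like this to the Kafka Connect REST API (connector name, topic, and URL below are placeholders for your setup):

```
{
  "name": "cassandra-to-es",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "topics": "cassandra_events",
    "connection.url": "http://localhost:9200",
    "key.ignore": "true",
    "schema.ignore": "true"
  }
}
```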

How to reset replication stream between couchbase and elasticsearch

I have a Couchbase cluster set up as the primary source for data. From this, a subset of the data is synced to an Elasticsearch cluster via the Couchbase Transport Plugin for Elasticsearch (https://github.com/couchbaselabs/elasticsearch-transport-couchbase), which sets up an XDCR stream from Couchbase to Elasticsearch.
Due to some issues with the Elasticsearch cluster, all data needs to be synced again from Couchbase to Elasticsearch. I have tried recreating the XDCR stream, but that does not seem to help, as it only copies a very small subset of documents. Is there a way this can be achieved?
Additional details
Couchbase version: 3.1.0
Number of couchbase documents: 50K+
Documents synced to elasticsearch: around 700 (expected 20K+)
If a document in couchbase is modified it is successfully synced to elasticsearch
The issue you're experiencing is likely in one of the following: XDCR, the Couchbase Transport Plugin for Elasticsearch, or Elasticsearch itself.
Start by checking for XDCR errors. You can find your XDCR logs using these instructions. Be aware that the Transport Plugin uses XDCR v1 and almost everything else in Couchbase uses v2.
Consult the advice on troubleshooting the Couchbase Transport Plugin for Elasticsearch. The instructions should work for you even though they are from the 4.0 docs.
Pay attention to how your documents are being mapped to Elasticsearch. You mention that you expect only a subset of documents to be synced to Elasticsearch, so it's possible that a setting was lost or something was misconfigured. You can enable logging and observe a small set of test data; at TRACE level, you should be able to see each document as it is inspected.
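Log levels can usually be raised dynamically without restarting the nodes; a sketch using the cluster settings API (the logger path for the Couchbase plugin below is an assumption on my part; check the plugin's documentation for the exact package name):

```
PUT /_cluster/settings
{
  "transient": {
    "logger.transport.couchbase": "TRACE"
  }
}
```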
If all of that fails, make sure the basics are working by indexing the beer sample dataset, following the directions in the Couchbase docs. ES is probably not the issue, but testing with a fresh ES instance will rule out problems on that side.
