I use ECK (Elastic Cloud on Kubernetes) with Azure Kubernetes Service.
ECK version: 1.2.1
One Elasticsearch node (in a single pod) + one Kibana node.
I need to upgrade Elasticsearch from version 7.9 to 7.10.
I updated the Elasticsearch version in the YAML file and ran:
kubectl apply -f elasticsearch.yaml
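For reference, the version bump is a one-line change in the Elasticsearch custom resource that the operator watches (the resource name here is hypothetical; the rest matches a minimal single-node spec):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart        # hypothetical resource name
spec:
  version: 7.10.0         # bumped from 7.9.x; the operator performs a rolling upgrade
  nodeSets:
  - name: default
    count: 1
```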
But it was not updated; the old Elasticsearch version is still running in the same pod.
How do I upgrade Elasticsearch?
Will any data be lost?
Problem solved.
I added one extra VM to the k8s cluster and the operator upgraded Elasticsearch.
It looks like there were not enough resources in the cluster to run the upgrade.
I also added one extra Elasticsearch pod. Perhaps the upgrade just does not work with a single Elasticsearch pod.
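When the cluster lacks resources, the replacement pod typically sits in Pending with a FailedScheduling event. A quick way to check (assuming the default namespace; substitute your pod name) would be:

```shell
kubectl get pods
kubectl describe pod <elasticsearch-pod-name> | grep -A 3 Events
```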
Related
I have configured a small Google Kubernetes Engine cluster with one node. I want to deploy the Elasticsearch service in this cluster. How do I set that up? I need the necessary steps.
In the Google Cloud Marketplace there are different categories of Elasticsearch. If you want container images, you just need to use the pull commands:
Elasticsearch 5:
gcloud auth configure-docker && docker pull marketplace.gcr.io/google/elasticsearch5:latest
Elasticsearch 6:
gcloud auth configure-docker && docker pull marketplace.gcr.io/google/elasticsearch6:latest
For a Kubernetes app (i.e. deploying directly to your cluster),
you can also deploy using Helm, as suggested by @Luiz.
You can use the Elasticsearch Helm chart and manually tune its resource limits and requests to fit onto one node.
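A minimal sketch of that, assuming the official elastic Helm chart; the resource values below are illustrative and should be sized to your node:

```shell
helm repo add elastic https://helm.elastic.co
# Single-node cluster with reduced resources (values illustrative)
helm install elasticsearch elastic/elasticsearch \
  --set replicas=1 \
  --set minimumMasterNodes=1 \
  --set resources.requests.cpu=100m \
  --set resources.requests.memory=1Gi \
  --set resources.limits.memory=1Gi
```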
So I have an ES cluster hosted on AWS where documents are mostly dynamic JSON without a fixed mapping.
When I try to do "Create index pattern" in Kibana,
it errors out with:
Error Payload content length greater than maximum allowed: 1048576
So I need to either increase server.maxPayloadBytes in kibana.yml (https://www.elastic.co/guide/en/kibana/current/settings.html)
or figure out another way.
To increase server.maxPayloadBytes I would have to edit kibana.yml directly, but I am not sure how to do that.
I have a VPC endpoint for the cluster, but I couldn't SSH into it.
I am running
Elasticsearch version: 6.3
Talked to DevOps; apparently changing server.maxPayloadBytes on AWS Elasticsearch Service is not possible, since AWS (as of now) does not allow it.
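For a self-managed Kibana, where kibana.yml is accessible, the change would be a single setting (the byte value below is illustrative); on AWS Elasticsearch Service this file is not exposed:

```yaml
# kibana.yml — default is 1048576 (1 MB); raise it to e.g. 4 MB
server.maxPayloadBytes: 4194304
```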
cAdvisor v0.29.0
k8s v1.9
es v6.1.2
The ELK stack in k8s works as expected. cAdvisor also works, but fails to connect to ES:
Added container args:
"-storage_driver=elasticsearch",
"-storage_driver_es_host='http://elasticsearch:9200'"
Error: Failed to initialize storage driver: failed to create the elasticsearch client - no Elasticsearch node available
I had this same issue in Swarm, and as far as I can tell it is not related to Kubernetes. The main issue is that cAdvisor v0.29 does not contain a storage driver for Elasticsearch version 6. The version of cAdvisor you are using only includes a client driver for Elasticsearch version 2, specified (on line 27) in the source here. So the error message "Failed to initialize storage driver" means that cAdvisor cannot connect to that ES instance because it does not have the proper driver for that version of Elasticsearch.
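A quick way to confirm which Elasticsearch major version cAdvisor is hitting (hostname taken from the args above) is to query the root endpoint, which returns the version number in JSON:

```shell
# Should print something like  "number" : "6.1.2"
# — newer than the v2 client bundled with cAdvisor v0.29
curl -s http://elasticsearch:9200 | grep '"number"'
```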
There is a GitHub issue for cAdvisor that would add a driver for Elasticsearch 5 (but not necessarily 6), but the change hasn't been merged into the master branch yet.
I have a Kubernetes cluster in AWS (kops v1.4.4) and used the following instructions to install fluentd, Elasticsearch and Kibana: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch
I am able to see my pods' logs in Kibana, but all Kubernetes-related metadata, such as pod name, Docker container ID, etc., ends up in the same field (called tag).
Is there any other modification I need to make to properly integrate Kubernetes with Elasticsearch and Kibana?
Thank you
I am trying to set up a multi-node Elasticsearch cluster. Is there any useful link I can follow to set up the cluster?
I am trying to run a MapReduce program on the cluster to find exact matches.
From my experience, if you just run the executable on two or more machines connected via a network, Elasticsearch will figure it out and all nodes will be added to the same cluster. I don't think you have to do anything.
This is the tutorial I've used: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup.html
Here is a step-by-step guide on how to set up an EMR cluster with Elasticsearch and Kibana installed, using the bootstrap actions mentioned before.
http://blogs.aws.amazon.com/bigdata/post/Tx1E8WC98K4TB7T/Getting-Started-with-Elasticsearch-and-Kibana-on-Amazon-EMR
The article also provides basic Elasticsearch tests on the installed cluster.
The bootstrap actions also install the Elasticsearch-Hadoop plugin, which will allow you to run MapReduce or other Hadoop applications.
The latest version of the Elasticsearch bootstrap actions is available here:
https://github.com/awslabs/emr-bootstrap-actions/tree/master/elasticsearch
The only thing needed to cluster two Elasticsearch nodes is an identical cluster name on both nodes. You can find the cluster name in the elasticsearch.yml file (located in the config folder of the Elasticsearch installation). The default cluster name is elasticsearch.
To change the name, edit the property in elasticsearch.yml:
cluster.name: "custom cluster name"
Elasticsearch uses zen discovery to find the nodes in the cluster during startup. If the cluster name is identical, Elasticsearch will automatically form the cluster.
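A minimal two-node sketch, assuming a pre-7.x Elasticsearch with zen discovery (the cluster name and IP addresses below are illustrative):

```yaml
# elasticsearch.yml — identical cluster.name on both nodes
cluster.name: my-cluster
# Optionally point unicast discovery at the nodes' addresses
# so they can find each other without multicast
discovery.zen.ping.unicast.hosts: ["10.0.0.1", "10.0.0.2"]
```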
Check out this link. You need to install the Amazon PowerShell tools; replace the variables in the script with what you want and it should launch an EMR cluster with Elasticsearch.
https://github.com/awslabs/emr-bootstrap-actions/tree/master/elasticsearch
You can use Kubernetes to create a cluster of Elasticsearch nodes running inside Docker containers.
Take a look at
https://github.com/kubernetes/kubernetes/tree/master/examples/elasticsearch