KIBANA on K8S restart after plugin installation - elasticsearch

I installed the Open Distro alerting plugin in my Kibana running as a k8s Deployment, using a lifecycle postStart hook. The installation is successful, but in the Kibana UI I can't see the plugin buttons. After some searching it appears that I have to restart the Kibana that is running in the pod. How can I achieve that without losing the image that has the plugin installed? Restarting the pod makes me lose the previous image and the installation happens again. I am running Kibana 7.10.2.
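For context, an install from a lifecycle postStart hook typically looks something like the snippet below (a sketch only; the plugin URL is a placeholder, not the real Open Distro download link):

# Kibana container spec in the Deployment (sketch)
lifecycle:
  postStart:
    exec:
      command:
        - /bin/sh
        - -c
        - /usr/share/kibana/bin/kibana-plugin install https://example.com/opendistro-alerting-kibana.zip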

You could try to achieve this using init containers and related machinery: basically, find out what the install is doing and mimic those changes. A cleaner approach, and the one recommended by Elastic, is to bring your own image:
https://www.elastic.co/guide/en/cloud-on-k8s/master/k8s-kibana-plugins.html
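A minimal sketch of such an image for Kibana 7.10.2 (the plugin zip URL is a placeholder for the Open Distro alerting release that matches your Kibana version):

FROM docker.elastic.co/kibana/kibana:7.10.2
# Placeholder URL -- substitute the Open Distro alerting Kibana plugin zip for 7.10.2
RUN /usr/share/kibana/bin/kibana-plugin install https://example.com/opendistro-alerting-kibana.zip

Build and push that image, point the Kibana Deployment at it instead of the stock image, and drop the postStart hook; the plugin is then baked into the image and survives pod restarts.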

Related

Any best way to create a Kibana automated snapshot to GCP storage, as I am using an older version of Kibana

Any best way to create a Kibana automated snapshot to GCP storage, as I am using an older version of Kibana (7.7.1)? Also, I do not have any automated backup currently.
Elasticsearch has Snapshot Lifecycle Management (SLM), which helps you do this, and Kibana exposes it in its UI; you need at least the basic license.
Here is a tutorial; you could also use the SLM API directly to create and automate this process, along with Index Lifecycle Management.
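A rough sketch of doing it through the API (repository and bucket names are placeholders, and the gcs repository type needs the repository-gcs plugin installed on the Elasticsearch nodes):

# Register a GCS snapshot repository
curl -X PUT "localhost:9200/_snapshot/my_gcs_repo" -H 'Content-Type: application/json' -d'
{
  "type": "gcs",
  "settings": { "bucket": "my-snapshot-bucket", "client": "default" }
}'

# Create an SLM policy: snapshot all indices nightly, keep them for 30 days
curl -X PUT "localhost:9200/_slm/policy/nightly-snapshots" -H 'Content-Type: application/json' -d'
{
  "schedule": "0 30 1 * * ?",
  "name": "<nightly-snap-{now/d}>",
  "repository": "my_gcs_repo",
  "config": { "indices": ["*"] },
  "retention": { "expire_after": "30d", "min_count": 5, "max_count": 50 }
}'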

Custom plugin menu item does not show up in Kibana deployed using Kubernetes

I have created a custom Kibana plugin using the instructions from the link below.
When I build and install the generated plugin on a Kibana instance deployed using a pre-built version, the plugin works and the menu item shows up in the Kibana UI.
However, when I install the same plugin on a Kibana instance deployed using Kubernetes, the menu item does not show up in the Kibana UI. The plugin is, however, found inside the plugins directory. Is there any additional configuration to be done for Kibana running on Kubernetes?
Installation of the plugin is done with the following command.
bin/kibana-plugin install file:///tmp/testplugin-0.0.0.zip
I am using Kibana version 6.8.2.

Showing crashed/terminated pod logs on Kibana

I am currently working on the ELK setup for my Kubernetes clusters. I set up logging for all the pods and fortunately, it's working fine.
Now I want to push the logs of terminated/crashed pods (the details we get by describing the pod, but not as docker logs) to my Kibana instance as well.
I checked on my server for those logs, but they don't seem to be stored anywhere on my machine. (inside /var/log/)
Maybe it's not enabled, or I might not be aware of where to find them.
If these logs are available in a log file similar to the system log then I think it would be very easy to put them on Kibana.
It would be a great help if anyone can help me achieve this.
You need to use kube-state-metrics, which gives you all pod-related metrics. You can configure it to feed Elasticsearch; it will create an index for each different kind of metric. Then you can easily use that index to display your charts/graphs in the Kibana UI.
https://github.com/kubernetes/kube-state-metrics
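kube-state-metrics only exposes Prometheus-format metrics over HTTP, so one common way to get them into Elasticsearch is a shipper such as Metricbeat's kubernetes module; a sketch (service name, port, and output address are assumptions for your cluster):

metricbeat.modules:
  - module: kubernetes
    metricsets:
      - state_pod
      - state_container
    period: 10s
    # Address of the kube-state-metrics service
    hosts: ["kube-state-metrics:8080"]

output.elasticsearch:
  hosts: ["elasticsearch:9200"]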

How to set Elasticsearch 6.x password without using X-Pack

We are using Elasticsearch in a Kubernetes cluster (not exposed publicly) without X-Pack security, and had it working in 5.x with elastic/changeme, but after trying to get it set up with 6.x, it's now requiring a password, and the default of elastic/changeme no longer works.
We didn't explicitly configure it to require authentication, since it's not publicly exposed and only accessible internally, so not sure why it's requiring the password, or more importantly, how we can find out what it is or how to set/change it without using X-Pack security.
Will we end up needing to subscribe to X-Pack since we're trying to use it within a Kubernetes cluster?
Not sure how you are deploying Elasticsearch in Kubernetes, but we had a similar issue and ended up passing this:
xpack.security.enabled=false
through the environment to the container.
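In a plain Kubernetes manifest that would look roughly like this (container name and image tag are only examples):

containers:
  - name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:6.4.2
    env:
      # Turns X-Pack security off, restoring the unauthenticated behaviour of 5.x
      - name: xpack.security.enabled
        value: "false"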
If you don't use X-Pack at all, you should use the OSS flavor of Elasticsearch. It includes only the open source components of Elasticsearch:
docker pull docker.elastic.co/elasticsearch/elasticsearch-oss:6.4.2
The interesting thing is, Elastic has removed any mention of it from the documentation since 6.3.
See:
Docker 6.2
Docker current

reload elasticsearch after changing elasticsearch.yml

How to make elasticsearch apply new configuration?
I changed one string in file ~ES_HOME/config/elasticsearch.yml:
# Disable HTTP completely:
#
http.enabled: false
Then tried to reload elasticsearch:
elasticsearch reload
Then tried to restart elasticsearch:
elasticsearch restart
Then I checked and saw that HTTP requests are still accepted by Elasticsearch.
So my settings are not applied.
My OS is OS X. The Elasticsearch version is 1.2.0.
Strangely or not, the supposed way to do it is just to stop the service and start it again :)
I.e. get its PID (by running ps axww | grep elastic), and then kill that PID; just be sure to use the TERM signal, to give it a chance to close properly.
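Roughly, assuming a tarball install with ES_HOME pointing at the install directory (the pgrep pattern is illustrative; adjust it if it matches more than the Elasticsearch java process):

# Stop Elasticsearch gracefully (TERM, not KILL)
kill -TERM "$(pgrep -f elasticsearch)"
# Start it again as a background daemon
$ES_HOME/bin/elasticsearch -d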
Some *nix Elasticsearch distros have control script wrappers for start/stop, but I don't think OS X does.
And on a side note, you have probably found the Cluster Update Settings API, and though it provides quite a few options, regretfully it can't be used to change that particular setting.
HTH
P.S. And yep, on a Windows setup services.msc is the way to do it, but I doubt this is helpful for you :)
When you have installed the current version (7.4 at the time of writing) of Elasticsearch on macOS with Homebrew, you can run:
brew services restart elastic/tap/elasticsearch-full
This will restart Elasticsearch and reload the configuration.
If Elasticsearch is installed as a Windows service, then you have to restart the Elasticsearch Windows service.
Thanks.
