Elasticsearch "certificate has expired" from Kibana Dev Tools - elasticsearch

I have had Elasticsearch and Kibana Helm charts deployed on my Kubernetes cluster for a couple of years now, and I've been working with Kibana's Dev Tools to query my Elasticsearch.
A few days ago I started getting the following error when running any query from Kibana's Dev Tools:
{"statusCode":502,"error":"Bad Gateway","message":"certificate has expired"}
But when I use a curl command, or simply open the browser and enter my Elasticsearch URL with some URI, it works and I get everything I need.
Moreover, when I query the /_ssl/certificates endpoint, it says the certificate's expiry is about a year away, so the certificate in use appears to be valid, yet I still get 'certificate has expired' from the Dev Tools.
Does anyone know if there are other certificates I should check?
Edit: adding the output of the /_ssl/certificates endpoint:
$ curl -k -u elastic:*** "https://localhost:9200/_ssl/certificates?pretty"
[
  {
    "path" : "/usr/share/elasticsearch/config/certs/tls.crt",
    "format" : "PEM",
    "alias" : null,
    "subject_dn" : "CN=***, O=***, L=***, ST=***, C=***",
    "serial_number" : "***",
    "has_private_key" : true,
    "expiry" : "2024-01-19T23:59:59.000Z"
  },
  {
    "path" : "/usr/share/elasticsearch/config/certs/tls.crt",
    "format" : "PEM",
    "alias" : null,
    "subject_dn" : "CN=***, O=***, L=***, ST=***, C=***",
    "serial_number" : "***",
    "has_private_key" : false,
    "expiry" : "2024-01-19T23:59:59.000Z"
  },
  {
    "path" : "/usr/share/elasticsearch/config/certs/tls.crt",
    "format" : "PEM",
    "alias" : null,
    "subject_dn" : "CN=DigiCert TLS RSA SHA256 2020 CA1, O=DigiCert Inc, C=US",
    "serial_number" : "***",
    "has_private_key" : false,
    "expiry" : "2031-04-13T23:59:59.000Z"
  },
  {
    "path" : "/usr/share/elasticsearch/config/certs/tls.crt",
    "format" : "PEM",
    "alias" : null,
    "subject_dn" : "CN=DigiCert Global Root CA, OU=www.digicert.com, O=DigiCert Inc, C=US",
    "serial_number" : "***",
    "has_private_key" : false,
    "expiry" : "2031-11-10T00:00:00.000Z"
  }
]
Note: Replaced sensitive information with '***'.

It looks like the SSL certificate on one or more nodes has expired. To find that node, open kibana.yml and check elasticsearch.hosts; whichever node Kibana is querying is the one whose certificate has expired.
You can renew the certificate with the help of this article.
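To confirm which certificate that host is actually serving, something like the following should work (the hostname and port are placeholders; use the value from elasticsearch.hosts in kibana.yml):
# Hypothetical host/port - substitute the value from elasticsearch.hosts in kibana.yml
openssl s_client -connect es.example.com:9200 -servername es.example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates
If the leaf certificate looks fine, it may also be worth checking the CA bundle Kibana is configured with (elasticsearch.ssl.certificateAuthorities in kibana.yml), since an expired CA certificate in that chain produces the same "certificate has expired" error.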

Related

Not able to configure Elasticsearch snapshot repository using OCI Amazon S3 Compatibility API

My Elasticsearch 7.8.0 is running in OCI OKE (Kubernetes running in Oracle Cloud). I want to set up an Elasticsearch backup snapshot with the OCI Object Store using the OCI Amazon S3 Compatibility API. I added the repository-s3 plugin and configured ACCESS_KEY and SECRET_KEY in the pods. While creating the repository, I am getting "s_s_l_peer_unverified_exception":
PUT /_snapshot/s3-repository
{
  "type": "s3",
  "settings": {
    "client": "default",
    "region": "OCI_REGION",
    "endpoint": "OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com",
    "bucket": "es-backup"
  }
}
Response:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "repository_verification_exception",
        "reason" : "[s3-repository] path is not accessible on master node"
      }
    ],
    "type" : "repository_verification_exception",
    "reason" : "[s3-repository] path is not accessible on master node",
    "caused_by" : {
      "type" : "i_o_exception",
      "reason" : "Unable to upload object [tests-0J3NChNRT9WIQJknHAssKg/master.dat] using a single upload",
      "caused_by" : {
        "type" : "sdk_client_exception",
        "reason" : "Unable to execute HTTP request: Certificate for <es-backup.OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com> doesn't match any of the subject alternative names: [swiftobjectstorage.us-ashburn-1.oraclecloud.com]",
        "caused_by" : {
          "type" : "s_s_l_peer_unverified_exception",
          "reason" : "Certificate for <es-backup.OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com> doesn't match any of the subject alternative names: [swiftobjectstorage.us-ashburn-1.oraclecloud.com]"
        }
      }
    }
  },
  "status" : 500
}
I hope you are aware of when to use the S3 Compatibility API.
"endpoint":"OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com"
Please change OCI_TENANCY to TENANCY_NAMESPACE. Please refer to this link for more information.
You can find your tenancy namespace on the Administration -> Tenancy Details page.
Well, you shouldn't be talking to es-backup.OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com, where your bucket name is part of the domain. You can try it in your browser and you'll get a similar security warning about certs.
If you look at https://docs.cloud.oracle.com/en-us/iaas/Content/Object/Tasks/s3compatibleapi.htm#usingAPI you'll see a mention of:
The application must use path-based access. Virtual host-style access (accessing a bucket as bucketname.namespace.compat.objectstorage.region.oraclecloud.com) is not supported.
AWS is migrating from path-based to sub-domain-based URLs for S3 (https://aws.amazon.com/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/), so the ES S3 plugin is probably defaulting to doing things the new AWS way.
Does it make a difference if you use an https:// URL for the endpoint value? Looking at my 6.8 config I have something like:
{
  "s3-repository": {
    "type": "s3",
    "settings": {
      "bucket": "es-backup",
      "client": "default",
      "endpoint": "https://{namespace}.compat.objectstorage.us-ashburn-1.oraclecloud.com/",
      "region": "us-ashburn-1"
    }
  }
}
My guess is that a full URL for the endpoint sets the protocol and path_style_access, or that 6.8 didn't require path_style_access to be true while 7.8 might. Either way, try a full URL or set path_style_access to true. Relevant docs at https://www.elastic.co/guide/en/elasticsearch/plugins/master/repository-s3-client.html
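A sketch of what the original repository request might look like with those two changes, assuming the tenancy namespace in the endpoint and path-style access forced (all values are placeholders):
# Hypothetical values - substitute TENANCY_NAMESPACE, OCI_REGION and your bucket name
curl -X PUT "http://localhost:9200/_snapshot/s3-repository" \
  -H 'Content-Type: application/json' -d '
{
  "type": "s3",
  "settings": {
    "client": "default",
    "bucket": "es-backup",
    "region": "OCI_REGION",
    "endpoint": "https://TENANCY_NAMESPACE.compat.objectstorage.OCI_REGION.oraclecloud.com",
    "path_style_access": true
  }
}'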

ELK stack, why can't I create an index of RabbitMQ messages?

I recently developed a C# web app that produces and consumes messages on a RabbitMQ exchange of topic type, and everything is working very well. Then I decided to use the ELK stack to analyze the RabbitMQ logs, which is also working as expected. My troubles started when I decided to try to log all the messages that are produced and consumed.
I followed this guide to deploy the ELK stack.
How to Install ELK Stack on Debian 9
Then my troubles started...
This is an extract of curl -XGET 'localhost:9200':
{
  "name" : "dvv7m8h",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "545-XOzEQ7W2C982ISVnng",
  "version" : {
    "number" : "6.8.4",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "bca0c8d",
    "build_date" : "2019-10-16T06:19:49.319352Z",
    "build_snapshot" : false,
    "lucene_version" : "7.7.2",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
As the official documentation states (Rabbitmq input plugin), I need to enable the plugin by running the command bin/logstash-plugin install logstash-input-rabbitmq, but there is no bin/logstash-plugin command available for me! I have looked nearly everywhere on the web, but after three days still no results. As a reference, I'll post my Logstash config file as well.
input {
  rabbitmq {
    host => 'xxx.yyy.zz.nn:5672'
    exchange => "my_exchange"
    exchange_type => "topic"
    id => "rabb"
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "rabtest-%{+YYYY.MM.dd}"
  }
}
Can anyone tell me what I'm missing? Is the plugin already shipped as a bundle inside Logstash v6.8.x? Why don't I have the aforementioned command to install the plugin? Thanks.
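One hedged guess, assuming a standard .deb install as in the guide above: the command exists but isn't on the PATH, since the Debian package puts the Logstash home under /usr/share/logstash:
# Locate the plugin tool installed by the Debian package (path may vary by install method)
dpkg -L logstash | grep 'bin/logstash-plugin'
# Typical location on .deb installs; the rabbitmq input may already be bundled, so list first
/usr/share/logstash/bin/logstash-plugin list | grep rabbitmq
/usr/share/logstash/bin/logstash-plugin install logstash-input-rabbitmq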

How to use document API with HTTPS Elasticsearch

I'm having a rough time figuring out how I can use this endpoint, but with HTTPS:
PUT twitter/_doc/1
{
  "user" : "kimchy",
  "post_date" : "2009-11-15T14:12:12",
  "message" : "trying out Elasticsearch"
}
Changing to https doesn't work; I don't get any response at all. I'm using X-Pack, and I didn't configure any special user.
Any hint?
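A minimal sketch of the same request over HTTPS with curl, assuming X-Pack security with the built-in elastic user and the cluster's CA certificate (credentials and paths are placeholders):
# Hypothetical credentials and CA path - adjust to your cluster
curl --cacert /path/to/ca.crt -u elastic:changeme \
  -X PUT "https://localhost:9200/twitter/_doc/1" \
  -H 'Content-Type: application/json' -d '
{
  "user" : "kimchy",
  "post_date" : "2009-11-15T14:12:12",
  "message" : "trying out Elasticsearch"
}'
If the certificate is self-signed and you only want a quick test, -k/--insecure skips verification.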

Can't connect to my proxied Elasticsearch node

I'm having issues connecting from my Go client to my ES node.
I have elasticsearch behind an nginx proxy that sets basic auth.
All settings are default in ES besides memory.
Via browser it works wonderfully, but not via this client:
https://github.com/olivere/elastic
I read the docs and they say it uses the /_nodes/http API to connect. Now this is probably where I did something wrong, because the response from that API looks like this:
{
  "_nodes" : {
    "total" : 1,
    "successful" : 1,
    "failed" : 0
  },
  "cluster_name" : "elasticsearch",
  "nodes" : {
    "u6TqFjAvRBa3_4FndfKh4w" : {
      "name" : "u6TqFjA",
      "transport_address" : "127.0.0.1:9300",
      "host" : "127.0.0.1",
      "ip" : "127.0.0.1",
      "version" : "5.6.2",
      "build_hash" : "57e20f3",
      "roles" : [
        "master",
        "data",
        "ingest"
      ],
      "http" : {
        "bound_address" : [
          "[::1]:9200",
          "127.0.0.1:9200"
        ],
        "publish_address" : "127.0.0.1:9200",
        "max_content_length_in_bytes" : 104857600
      }
    }
  }
}
I'm guessing I have to set the IPs to my actual IP/domain (my domain is something like es01.somedomain.com).
So how do I correctly configure Elasticsearch so that my Go client can connect?
My config files for nginx look similar to this: https://www.elastic.co/blog/playing-http-tricks-nginx
Edit: I found a temporary solution by setting elastic.SetSniff(false) in the Options for the client, but I think that means I can't scale ES horizontally. So I'm still looking for an alternative.
You are looking for the HTTP options, specifically http.publish_host and http.publish_port, which should be set to the publicly reachable address and port of the Nginx server proxying the ES node.
Note that with Elasticsearch listening on 127.0.0.1:9300 for the transport, you won't be able to form a cluster with nodes on other hosts. The transport can be configured similarly with the transport options.
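A sketch of what that could look like in elasticsearch.yml, assuming the Nginx proxy is reachable at es01.somedomain.com on port 8080 (both values are hypothetical):
# Hypothetical host/port of the Nginx proxy in front of this node
cat >> /etc/elasticsearch/elasticsearch.yml <<'EOF'
http.publish_host: es01.somedomain.com
http.publish_port: 8080
EOF
# Restart the node so sniffing (/_nodes/http) starts returning the proxied address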

Elasticsearch basics: transport client or not?

I set up a Graylog stack (Graylog / ES / Mongo) and everything went smoothly (well, almost). Yesterday I tried to get some info using the following command:
curl 'http://127.0.0.1:9200/_nodes/process?pretty'
{
  "cluster_name" : "log_server_graylog",
  "nodes" : {
    "Znz_72SZSyikw6DEC4Wgzg" : {
      "name" : "graylog-27274b66-3bbd-4975-99ee-1ee3d692c522",
      "transport_address" : "127.0.0.1:9350",
      "host" : "127.0.0.1",
      "ip" : "127.0.0.1",
      "version" : "2.4.4",
      "build" : "fcbb46d",
      "attributes" : {
        "client" : "true",
        "data" : "false",
        "master" : "false"
      },
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 788,
        "mlockall" : false
      }
    },
    "XO77zz8MRu-OOSymZbefLw" : {
      "name" : "test",
      "transport_address" : "127.0.0.1:9300",
      "host" : "127.0.0.1",
      "ip" : "127.0.0.1",
      "version" : "2.4.4",
      "build" : "fcbb46d",
      "http_address" : "127.0.0.1:9200",
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 946,
        "mlockall" : false
      }
    }
  }
}
It does look like (to me at least) that there are two nodes running; someone on the ES IRC told me there might be a transport client running (which shows up as a second node)...
I really don't understand where this transport client comes from. Also, the person on IRC told me that using a transport client used to be a common setup, but it is discouraged now. How can I change the config to follow ES best practices? (I couldn't find this in the docs.)
FYI, my config file :
cat /etc/elasticsearch/elasticsearch.yml
cluster.name: log_server_graylog
node.name: test
path.data: /tt/elasticsearch/data
path.logs: /tt/elasticsearch/log
network.host: 127.0.0.1
action.destructive_requires_name: true
# The following are useless as we are setting swappiness to 1; this should prevent ES memory from being swapped except in an emergency
#bootstrap.mlockall: true
#bootstrap.memory_lock: true
Thanks
I found the answer using the Graylog IRC: the second client is the Graylog client created by.... Graylog server :)
So everything is normal and as expected.
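To confirm which node is which, listing the node roles should show the Graylog-created node as a client-only node (assuming a 2.x cluster; column names can vary between versions):
# node.role: d = data node, c = client node; master: * = elected master, - = not master-eligible
curl 'http://127.0.0.1:9200/_cat/nodes?v&h=name,host,node.role,master'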
