Connect Kibana to Elasticsearch in a Kubernetes cluster

I have a running Elasticsearch cluster and I am trying to connect Kibana to it (same node). Currently the page hangs when I try to open the Kibana service in my browser via its NodePort address. The last few messages in my Kibana pod logs are:
{"type":"log","#timestamp":"2017-10-13T17:23:46Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0.0.0.0:5601"}
{"type":"log","#timestamp":"2017-10-13T17:23:46Z","tags":["status","ui settings","error"],"pid":1,"state":"red","message":"Status changed from uninitialized to red - Elasticsearch plugin is red","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2017-10-13T17:23:49Z","tags":["status","plugin:ml#5.6.3","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - Request Timeout after 3000ms","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
My kibana.yml file that is mounted into the kibana pod has the following config:
server.name: kibana-logging
server.host: 0.0.0.0
elasticsearch.url: http://elasticsearch:9300
xpack.security.enabled: false
xpack.monitoring.ui.container.elasticsearch.enabled: true
and my elasticsearch.yml file has the following settings (I have 3 ES pods):
cluster.name: elasticsearch-logs
node.name: ${HOSTNAME}
network.host: 0.0.0.0
bootstrap.memory_lock: false
xpack.security.enabled: false
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping.unicast.hosts: ["172.17.0.3:9300", "172.17.0.4:9300", "172.17.0.4:9300"]
I feel like the issue is with the network.host field, but I'm not sure. What fields am I missing or do I need to modify in order to connect a Kibana pod to Elasticsearch if they are in the same cluster/node? Thanks!
ES Service:
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  labels:
    component: elasticsearch
    role: master
spec:
  type: NodePort
  selector:
    component: elasticsearch
    role: master
  ports:
  - name: http
    port: 9200
    targetPort: 9200
    nodePort: 30303
    protocol: TCP
Kibana Service:
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: default
  labels:
    component: kibana
spec:
  type: NodePort
  selector:
    component: kibana
  ports:
  - port: 80
    targetPort: 5601
    protocol: TCP
EDIT:
After changing the port to 9200 in kibana.yml, here is what I see at the end of the logs when I try to access Kibana:
{"type":"log","#timestamp":"2017-10-13T21:36:30Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0.0.0.0:5601"}
{"type":"log","#timestamp":"2017-10-13T21:36:30Z","tags":["status","ui settings","error"],"pid":1,"state":"red","message":"Status changed from uninitialized to red - Elasticsearch plugin is red","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2017-10-13T21:36:33Z","tags":["status","plugin:ml#5.6.3","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - Request Timeout after 3000ms","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","#timestamp":"2017-10-13T21:37:02Z","tags":["error","elasticsearch","admin"],"pid":1,"message":"Request error, retrying\nPOST http://elasticsearch:9200/.reporting-*/esqueue/_search?version=true => getaddrinfo ENOTFOUND elasticsearch elasticsearch:9200"}
{"type":"log","#timestamp":"2017-10-13T21:37:32Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","#timestamp":"2017-10-13T21:37:33Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","#timestamp":"2017-10-13T21:37:37Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","#timestamp":"2017-10-13T21:37:38Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","#timestamp":"2017-10-13T21:37:42Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}

The issue here is that you exposed Elasticsearch on port 9200 but are trying to connect to port 9300 in your kibana.yml file.
You either need to edit your kibana.yml file to use:
elasticsearch.url: http://elasticsearch:9200
Or change the port in the elasticsearch service to 9300.
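If the error persists after the port change (the edited logs show getaddrinfo ENOTFOUND elasticsearch, i.e. a DNS failure rather than a port problem), it is worth checking name resolution from inside the Kibana pod. A quick sketch, assuming the pod image ships nslookup and curl:
# check that the Service name resolves from inside the Kibana pod
kubectl exec -it <kibana-pod-name> -- nslookup elasticsearch
# hit Elasticsearch through the Service on the HTTP port
kubectl exec -it <kibana-pod-name> -- curl -s http://elasticsearch:9200
If Kibana runs in a different namespace than the elasticsearch Service, use the fully qualified name instead, e.g. http://elasticsearch.default.svc.cluster.local:9200.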

Related

Kibana 7.17.3 - serverBasePath result in 404 Status

I set up an EFK stack using a Helm chart; the versions of Elasticsearch, Kibana and Filebeat are 7.17.3.
Helm Chart Link:
Installation succeeds.
I am able to access the Kibana UI (when exposed as a service of type LoadBalancer).
Now, when trying to access Kibana (using the existing nginx ingress and changing the Kibana service to ClusterIP) after setting server.basePath: "/kibana", I get a 404.
kibana.yml
server.host: "0.0.0.0"
server.port: "5601"
server.basePath: "/kibana"
server.rewriteBasePath: true
kibana-ingress-ssl.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - admin'
spec:
  tls:
  - hosts:
    - example.com
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          service:
            name: kibana-kibana
            port:
              number: 80
        path: /kibana
        pathType: Prefix
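A quick way to check whether the 404 comes from Kibana or from the ingress is to bypass the ingress and talk to the Kibana service directly. A sketch, assuming the Helm chart's default kibana-kibana service name:
kubectl port-forward svc/kibana-kibana 5601:5601
# with server.rewriteBasePath: true, Kibana should answer under the base path
curl -i http://localhost:5601/kibana/api/status
If this returns 200, Kibana's basePath handling is fine and the 404 is produced by the ingress path configuration.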

Cannot get elastic working via kubernetes ingress

I have the following service:
# kubectl get svc es-kib-opendistro-es-client-service -n logging
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
es-kib-opendistro-es-client-service ClusterIP 10.233.19.199 <none> 9200/TCP,9300/TCP,9600/TCP,9650/TCP 279d
#
When I perform a curl to the IP address of the service it works fine:
# curl https://10.233.19.199:9200/_cat/health -k --user username:password
1638224389 22:19:49 elasticsearch green 6 3 247 123 0 0 0 0 - 100.0%
#
I created an ingress so I can access the service from outside:
# kubectl get ingress ingress-elasticsearch -n logging
NAME HOSTS ADDRESS PORTS AGE
ingress-elasticsearch elasticsearch.host.com 10.32.200.4,10.32.200.7,10.32.200.8 80, 443 11h
#
When performing a curl to either 10.32.200.4, 10.32.200.7 or 10.32.200.8 I get an openresty 502 Bad Gateway response:
$ curl https://10.32.200.7 -H "Host: elasticsearch.host.com" -k
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>openresty/1.15.8.2</center>
</body>
</html>
$
When tailing the pod logs, I am seeing the following when performing the curl command:
# kubectl logs deploy/es-kib-opendistro-es-client -n logging -f
[2021-11-29T22:22:47,026][ERROR][c.a.o.s.s.h.n.OpenDistroSecuritySSLNettyHttpServerTransport] [es-kib-opendistro-es-client-6c8bc96f47-24k2l] Exception during establishing a SSL connection: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 414554202a20485454502f312e310d0a486f73743a20656c61737469637365617263682e6f6e696f722e636f6d0d0a582d526571756573742d49443a2034386566326661626561323364663466383130323231386639366538643931310d0a582d5265212c2d49503a2031302e33322e3230302e330d0a582d466f727761726465642d466f723a2031302e33322e3230302e330d0a582d466f727761726465642d486f73743a20656c61737469637365617263682e6f6e696f722e636f6d0d0a582d466f727761721235642d506f72743a203434330d0a582d466f727761726465642d50726f746f3a2068747470730d0a582d536368656d653a2068747470730d0a557365722d4167656e743a206375726c2f372e32392e300d0a4163636570743a202a2f2a0d1b0d0a
#
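The long hex string in that log line is the raw plaintext HTTP request that reached the TLS port. Decoding it (a sketch; paste the full hex from your own log in place of the abbreviated string) shows the request line plus the X-Forwarded-* headers the ingress added:
# decode the hex payload from the Netty exception into readable text
echo "414554202a20485454502f312e310d0a..." | xxd -r -p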
My ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  labels:
    app: elasticsearch
  name: ingress-elasticsearch
  namespace: logging
spec:
  rules:
  - host: elasticsearch.host.com
    http:
      paths:
      - backend:
          serviceName: es-kib-opendistro-es-client-service
          servicePort: 9200
        path: /
  tls:
  - hosts:
    - elasticsearch.host.com
    secretName: cred-secret
status:
  loadBalancer:
    ingress:
    - ip: 10.32.200.4
    - ip: 10.32.200.7
    - ip: 10.32.200.8
My service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: es-kib-opendistro-es
    chart: opendistro-es-1.9.0
    heritage: Tiller
    release: es-kib
    role: client
  name: es-kib-opendistro-es-client-service
  namespace: logging
spec:
  clusterIP: 10.233.19.199
  ports:
  - name: http
    port: 9200
    protocol: TCP
    targetPort: 9200
  - name: transport
    port: 9300
    protocol: TCP
    targetPort: 9300
  - name: metrics
    port: 9600
    protocol: TCP
    targetPort: 9600
  - name: rca
    port: 9650
    protocol: TCP
    targetPort: 9650
  selector:
    role: client
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
What is wrong with my setup?
By default, the ingress controller proxies incoming requests to your backend using the HTTP protocol.
Your backend service expects HTTPS, though, so you need to tell the nginx ingress controller to use HTTPS.
You can do so by adding an annotation to the Ingress resource like this:
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
Details about this annotation are in the documentation:
Using backend-protocol annotations is possible to indicate how NGINX should communicate with the backend service. (Replaces secure-backends in older versions) Valid Values: HTTP, HTTPS, GRPC, GRPCS, AJP and FCGI
By default NGINX uses HTTP.
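If you prefer not to edit the manifest, the same annotation can be attached in place; a sketch using the ingress name and namespace from the question:
# tell the nginx ingress controller to proxy to this backend over HTTPS
kubectl annotate ingress ingress-elasticsearch -n logging \
  nginx.ingress.kubernetes.io/backend-protocol=HTTPS --overwrite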

Received plaintext http traffic on an https channel, closing connection

I have deployed ECK (using Helm) on my k8s cluster and I am attempting to install Elasticsearch following the docs: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html
I have externally exposed service/elasticsearch-prod-es-http so that I can connect to it from outside of my k8s cluster. However, as you can see, when I try to connect to it either via curl or the browser, I receive a "502 Bad Gateway" error:
curl elasticsearch.dev.acme.com
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
</body>
</html>
Upon checking the pod (elasticsearch-prod-es-default-0) I can see the following message repeated:
{"type": "server", "timestamp": "2021-04-27T13:12:20,048Z", "level": "WARN", "component": "o.e.x.s.t.n.SecurityNetty4HttpServerTransport", "cluster.name": "elasticsearch-prod", "node.name": "elasticsearch-prod-es-default-0", "message": "received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/10.0.5.81:9200, remoteAddress=/10.0.3.50:46380}", "cluster.uuid": "t0mRfv7kREGQhXW9DVM3Vw", "node.id": "nCyAItDmSqGZRa3lApsC6g" }
Can you help me understand why this is occurring and how to fix it?
I suspect it has something to do with my TLS configuration, because when I disable TLS I'm able to connect externally without issues. However, in a production environment I think keeping TLS enabled is important.
FYI, I am able to port-forward the service and connect to it with curl using the -k flag.
What I have tried:
I have tried adding my domain to the subjectAltNames section as described here: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-http-settings-tls-sans.html#k8s-elasticsearch-http-service-san
I have tried using openssl to generate a self-signed certificate, but that did not work. Trying to connect locally returns the following error message:
curl -u "elastic:$PASSWORD" "https://localhost:9200"
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
I have tried generating a certificate using the tool https://www.elastic.co/guide/en/elasticsearch/reference/7.9/configuring-tls.html#tls-transport
bin/elasticsearch-certutil ca
bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --pem
Then, using the generated .crt and .key, I created a Kubernetes secret, elastic-tls-cert. But again, curling localhost without -k gave the following error:
curl --cacert cacert.pem -u "elastic:$PASSWORD" -XGET "https://localhost:9200"
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
elasticsearch.yml
# This sample sets up an Elasticsearch cluster with 3 nodes.
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch-prod
  namespace: elastic-system
spec:
  version: 7.12.0
  nodeSets:
  - name: default
    config:
      # most Elasticsearch configuration parameters are possible to set, e.g: node.attr.attr_name: attr_value
      node.roles: ["master", "data", "ingest", "ml"]
      # this allows ES to run on nodes even if their vm.max_map_count has not been increased, at a performance cost
      node.store.allow_mmap: false
      xpack.security.enabled: true
    podTemplate:
      metadata:
        labels:
          # additional labels for pods
          foo: bar
      spec:
        nodeSelector:
          acme/node-type: ops
        # this changes the kernel setting on the node to allow ES to use mmap
        # if you uncomment this init container you will likely also want to remove the
        # "node.store.allow_mmap: false" setting above
        # initContainers:
        # - name: sysctl
        #   securityContext:
        #     privileged: true
        #   command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
        ###
        # uncomment the line below if you are using a service mesh such as linkerd2 that uses service account tokens for pod identification.
        # automountServiceAccountToken: true
        containers:
        - name: elasticsearch
          # specify resource limits and requests
          resources:
            limits:
              memory: 4Gi
              cpu: 1
          env:
          - name: ES_JAVA_OPTS
            value: "-Xms2g -Xmx2g"
    count: 3
    # request 2Gi of persistent data storage for pods in this topology element
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 250Gi
        storageClassName: elasticsearch
  # inject secure settings into Elasticsearch nodes from k8s secrets references
  # secureSettings:
  # - secretName: ref-to-secret
  # - secretName: another-ref-to-secret
  #   # expose only a subset of the secret keys (optional)
  #   entries:
  #   - key: value1
  #     path: newkey # project a key to a specific path (optional)
  http:
    service:
      spec:
        # expose this cluster Service with a LoadBalancer
        type: NodePort
    # tls:
    #   selfSignedCertificate:
    #     # add a list of SANs into the self-signed HTTP certificate
    #     subjectAltNames:
    #     - ip: 192.168.1.2
    #     - ip: 192.168.1.3
    #     - dns: elasticsearch.dev.acme.com
    #     - dns: localhost
    #   certificate:
    #     # provide your own certificate
    #     secretName: elastic-tls-cert
kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:12:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.6-eks-49a6c0", GitCommit:"49a6c0bf091506e7bafcdb1b142351b69363355a", GitTreeState:"clean", BuildDate:"2020-12-23T22:10:21Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
elastic-operator elastic-system 1 2021-04-26 11:18:02.286692269 +0100 BST deployed eck-operator-1.5.0 1.5.0
resources
pod/elastic-operator-0 1/1 Running 0 4h58m 10.0.5.142 ip-10-0-5-71.us-east-2.compute.internal <none> <none>
pod/elasticsearch-prod-es-default-0 1/1 Running 0 9m5s 10.0.5.81 ip-10-0-5-71.us-east-2.compute.internal <none> <none>
pod/elasticsearch-prod-es-default-1 1/1 Running 0 9m5s 10.0.1.128 ip-10-0-1-207.us-east-2.compute.internal <none> <none>
pod/elasticsearch-prod-es-default-2 1/1 Running 0 9m5s 10.0.5.60 ip-10-0-5-71.us-east-2.compute.internal <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/elastic-operator-webhook ClusterIP 172.20.218.208 <none> 443/TCP 26h app.kubernetes.io/instance=elastic-operator,app.kubernetes.io/name=elastic-operator
service/elasticsearch-prod-es-default ClusterIP None <none> 9200/TCP 9m5s common.k8s.elastic.co/type=elasticsearch,elasticsearch.k8s.elastic.co/cluster-name=elasticsearch-prod,elasticsearch.k8s.elastic.co/statefulset-name=elasticsearch-prod-es-default
service/elasticsearch-prod-es-http NodePort 172.20.229.173 <none> 9200:30604/TCP 9m6s common.k8s.elastic.co/type=elasticsearch,elasticsearch.k8s.elastic.co/cluster-name=elasticsearch-prod
service/elasticsearch-prod-es-transport ClusterIP None <none> 9300/TCP 9m6s common.k8s.elastic.co/type=elasticsearch,elasticsearch.k8s.elastic.co/cluster-name=elasticsearch-prod
aws alb ingress controller
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: elastic-ingress
  namespace: elastic-system
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: "<redacted>"
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80,"HTTPS": 443}]'
    alb.ingress.kubernetes.io/certificate-arn: <redacted>
    alb.ingress.kubernetes.io/tags: Environment=prod,Team=dev
    alb.ingress.kubernetes.io/healthcheck-path: /health
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '300'
    alb.ingress.kubernetes.io/load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=acme-aws-ingress-logs,access_logs.s3.prefix=dev-ingress
spec:
  rules:
  - host: elasticsearch.dev.acme.com
    http:
      paths:
      - path: /*
        pathType: Prefix
        backend:
          service:
            name: elasticsearch-prod-es-http
            port:
              number: 9200
  # - host: kibana.dev.acme.com
  #   http:
  #     paths:
  #     - path: /*
  #       pathType: Prefix
  #       backend:
  #         service:
  #           name: kibana-prod-kb-http
  #           port:
  #             number: 5601
If anyone comes across this problem in the future, make sure your ingress is properly configured. The error message suggests a misconfiguration of the ingress:
received plaintext http traffic on an https channel, closing connection
In my case I am using the aws-load-balancer-controller. I had to attach an annotation to my ingress that forces the connection to the backend to be HTTPS rather than HTTP:
alb.ingress.kubernetes.io/backend-protocol: "HTTPS"
For my case this problem was fixed by adding the above annotation to my ingress file; it has nothing to do with setting up a custom/private TLS certificate.
Solution for me: http => https.
Alternatively, you can disable HTTP SSL: modify the config/elasticsearch.yml file and set the associated variable to false:
xpack.security.http.ssl:
  enabled: false
  keystore.path: certs/http.p12
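If you would rather keep TLS enabled, ECK publishes the CA it generates in a secret named <cluster-name>-es-http-certs-public, so curl can verify the connection without -k. A sketch for the elasticsearch-prod cluster above, assuming a kubectl port-forward to 9200 is running:
# extract the ECK-generated CA certificate
kubectl get secret elasticsearch-prod-es-http-certs-public -n elastic-system \
  -o go-template='{{index .data "ca.crt" | base64decode}}' > ca.crt
# read the elastic user's password from the secret ECK creates
PASSWORD=$(kubectl get secret elasticsearch-prod-es-elastic-user -n elastic-system \
  -o go-template='{{.data.elastic | base64decode}}')
curl --cacert ca.crt -u "elastic:$PASSWORD" https://localhost:9200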

How can I visualize Elasticsearch metrics in Prometheus? Both are installed in a GKE cluster

I have a GKE cluster with this Elasticsearch logging solution installed:
https://console.cloud.google.com/marketplace/details/google/elastic-gke-logging
and prometheus-operator installed via Helm in the same cluster.
I would like to configure a Grafana dashboard to visualize the metrics of my Elasticsearch.
I read that the Elastic application from GKE has the elastic_exporter installed: https://github.com/GoogleCloudPlatform/click-to-deploy/blob/master/k8s/elastic-gke-logging/README.md
But if I go to my Prometheus panel I don't see any metrics about Elasticsearch. I tried installing another elastic_exporter, but nothing changed.
Am I missing something? Do I need to configure Prometheus to read from the elastic_exporter?
I see the metrics when I port-forward the elastic_exporter, but I don't see them inside the Prometheus panel:
# HELP elasticsearch_breakers_estimated_size_bytes Estimated size in bytes of breaker
# TYPE elasticsearch_breakers_estimated_size_bytes gauge
elasticsearch_breakers_estimated_size_bytes{breaker="accounting",cluster="elastic-gke-logging-1-cluster",es_client_node="true",es_data_node="true",es_ingest_node="true",es_master_node="true",host="10.50.2.54",name="elastic-gke-logging-1-elasticsearch-0"} 4.6637464e+07
elasticsearch_breakers_estimated_size_bytes{breaker="fielddata",cluster="elastic-gke-logging-1-cluster",es_client_node="true",es_data_node="true",es_ingest_node="true",es_master_node="true",host="10.50.2.54",name="elastic-gke-logging-1-elasticsearch-0"} 0
elasticsearch_breakers_estimated_size_bytes{breaker="in_flight_requests",cluster="elastic-gke-logging-1-cluster",es_client_node="true",es_data_node="true",es_ingest_node="true",es_master_node="true",host="10.50.2.54",name="elastic-gke-logging-1-elasticsearch-0"} 0
elasticsearch_breakers_estimated_size_bytes{breaker="parent",cluster="elastic-gke-logging-1-cluster",es_client_node="true",es_data_node="true",es_ingest_node="true",es_master_node="true",host="10.50.2.54",name="elastic-gke-logging-1-elasticsearch-0"} 4.6637464e+07
elasticsearch_breakers_estimated_size_bytes{breaker="request",cluster="elastic-gke-logging-1-cluster",es_client_node="true",es_data_node="true",es_ingest_node="true",es_master_node="true",host="10.50.2.54",name="elastic-gke-logging-1-elasticsearch-0"} 0
# HELP elasticsearch_breakers_limit_size_bytes Limit size in bytes for breaker
# TYPE elasticsearch_breakers_limit_size_bytes gauge
Thank you
You are probably missing a ServiceMonitor; this should work:
kubectl apply -f - <<EOF
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  annotations: {}
  labels:
    release: prom
  name: elasticsearch
spec:
  endpoints:
  - port: metrics
  selector:
    matchLabels:
      app: es-exporter
EOF
Your elasticsearch Service must define a metrics port and have the label app: es-exporter, similar to this:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: es-exporter
    component: elasticsearch
  name: elasticsearch
spec:
  ports:
  - name: transport
    port: 9200
    protocol: TCP
    targetPort: 9200
  - name: metrics
    port: 9108
    protocol: TCP
    targetPort: 9108
  selector:
    component: elasticsearch
  type: ClusterIP
After that you should find the metrics in Prometheus. To confirm, you can always use the Status -> Targets tab in Prometheus.
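To verify the wiring end to end, you can check both sides; a sketch using the names from the manifests above:
# confirm the exporter serves metrics through the Service
kubectl port-forward svc/elasticsearch 9108:9108 &
curl -s http://localhost:9108/metrics | head
# confirm the ServiceMonitor exists and its selector matches the Service labels
kubectl get servicemonitor elasticsearch -o yaml
Note that the release: prom label is typically what lets the operator discover the ServiceMonitor; it has to match your Prometheus instance's serviceMonitorSelector.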

Elasticsearch Kubernetes pod - can't connect to port 9300

I am trying to connect to my Elasticsearch pod on ports 9200 and 9300. When I go to:
http://localhost:$IP_FROM_KUBECTL_PROXY(usually 8001)/api/v1/namespaces/default/pods/$POD_NAME/proxy/
I see the following error:
Error: 'net/http: HTTP/1.x transport connection broken: malformed HTTP status code "is"'
Trying to reach: 'http://172.17.0.5:9300/'
What I did is run:
kubectl run elasticsearch --image=elasticsearch:6.6.1 --labels="app=elasticsearch" --env="discovery.type=single-node" --port=9200 --port=9300
and running the following service:
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
spec:
  selector:
    host: elasticsearch
    subdomain: for-kibana
    app: elasticsearch
  ports:
  - protocol: TCP
    name: serving
    port: 9200
    targetPort: 9200
  - protocol: TCP
    name: node2node
    port: 9300
    targetPort: 9300
It's weird, because when I just use port 9200 everything works, but when I run with 9300 it fails.
Port 9300 is a binary protocol (not HTTP) and is used for node-to-node communication. Only port 9200 exposes the REST API.
From the documentation:
Both Java clients talk to the cluster over port 9300, using the native
Elasticsearch transport protocol. The nodes in the cluster also
communicate with each other over port 9300. If this port is not open,
your nodes will not be able to form a cluster.
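You can see the difference directly from inside the cluster with a throwaway curl pod (a sketch; the Service name is from the question):
# port 9200 speaks HTTP and returns the cluster banner
kubectl run curl-test --rm -it --image=curlimages/curl --restart=Never -- \
  curl -s http://elasticsearch:9200
# port 9300 speaks the binary transport protocol; 6.x answers an HTTP request
# with a plain-text message like "This is not a HTTP port", which the kubectl
# proxy then fails to parse as a status line (malformed HTTP status code "is")
kubectl run curl-test2 --rm -it --image=curlimages/curl --restart=Never -- \
  curl -s http://elasticsearch:9300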
