Elasticsearch Kubernetes pod - can't connect to port 9300 - elasticsearch

I am trying to connect to my Elasticsearch pod on ports 9200 and 9300. When I go to:
http://localhost:$PORT_FROM_KUBECTL_PROXY(usually 8001)/api/v1/namespaces/default/pods/$POD_NAME/proxy/
I see the following error:
Error: 'net/http: HTTP/1.x transport connection broken: malformed HTTP status code "is"'
Trying to reach: 'http://172.17.0.5:9300/'
What I did was run:
kubectl run elasticsearch --image=elasticsearch:6.6.1 -labels="elasticsearch" --env="discovery.type=single-node" --port=9200 --port=9300
and running the following service:
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
spec:
  selector:
    host: elasticsearch
    subdomain: for-kibana
    app: elasticsearch
  ports:
    - protocol: TCP
      name: serving
      port: 9200
      targetPort: 9200
    - protocol: TCP
      name: node2node
      port: 9300
      targetPort: 9300
It's weird, because when I just use port 9200 everything works, but when I try port 9300 it fails.

Port 9300 uses the binary transport protocol (not HTTP) and is meant for node-to-node communication. Only port 9200 exposes the REST API.
From the documentation:
Both Java clients talk to the cluster over port 9300, using the native
Elasticsearch transport protocol. The nodes in the cluster also
communicate with each other over port 9300. If this port is not open,
your nodes will not be able to form a cluster.
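To check the REST API instead, go through port 9200. A minimal sketch, assuming the pod created by the kubectl run command above; either point the proxy URL at port 9200:
http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME:9200/proxy/
or port-forward and curl the pod directly:
kubectl port-forward pod/$POD_NAME 9200:9200
curl http://localhost:9200/_cluster/health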

Related

Cannot get elastic working via kubernetes ingress

I have the following service:
# kubectl get svc es-kib-opendistro-es-client-service -n logging
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
es-kib-opendistro-es-client-service ClusterIP 10.233.19.199 <none> 9200/TCP,9300/TCP,9600/TCP,9650/TCP 279d
#
When I perform a curl to the IP address of the service it works fine:
# curl https://10.233.19.199:9200/_cat/health -k --user username:password
1638224389 22:19:49 elasticsearch green 6 3 247 123 0 0 0 0 - 100.0%
#
I created an ingress so I can access the service from outside:
# kubectl get ingress ingress-elasticsearch -n logging
NAME HOSTS ADDRESS PORTS AGE
ingress-elasticsearch elasticsearch.host.com 10.32.200.4,10.32.200.7,10.32.200.8 80, 443 11h
#
When performing a curl to either 10.32.200.4, 10.32.200.7 or 10.32.200.8 I am getting an openresty 502 Bad Gateway response:
$ curl https://10.32.200.7 -H "Host: elasticsearch.host.com" -k
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>openresty/1.15.8.2</center>
</body>
</html>
$
When tailing the pod logs, I am seeing the following when performing the curl command:
# kubectl logs deploy/es-kib-opendistro-es-client -n logging -f
[2021-11-29T22:22:47,026][ERROR][c.a.o.s.s.h.n.OpenDistroSecuritySSLNettyHttpServerTransport] [es-kib-opendistro-es-client-6c8bc96f47-24k2l] Exception during establishing a SSL connection: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 414554202a20485454502f312e310d0a486f73743a20656c61737469637365617263682e6f6e696f722e636f6d0d0a582d526571756573742d49443a2034386566326661626561323364663466383130323231386639366538643931310d0a582d5265212c2d49503a2031302e33322e3230302e330d0a582d466f727761726465642d466f723a2031302e33322e3230302e330d0a582d466f727761726465642d486f73743a20656c61737469637365617263682e6f6e696f722e636f6d0d0a582d466f727761721235642d506f72743a203434330d0a582d466f727761726465642d50726f746f3a2068747470730d0a582d536368656d653a2068747470730d0a557365722d4167656e743a206375726c2f372e32392e300d0a4163636570743a202a2f2a0d1b0d0a
#
My ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  labels:
    app: elasticsearch
  name: ingress-elasticsearch
  namespace: logging
spec:
  rules:
  - host: elasticsearch.host.com
    http:
      paths:
      - backend:
          serviceName: es-kib-opendistro-es-client-service
          servicePort: 9200
        path: /
  tls:
  - hosts:
    - elasticsearch.host.com
    secretName: cred-secret
status:
  loadBalancer:
    ingress:
    - ip: 10.32.200.4
    - ip: 10.32.200.7
    - ip: 10.32.200.8
My service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: es-kib-opendistro-es
    chart: opendistro-es-1.9.0
    heritage: Tiller
    release: es-kib
    role: client
  name: es-kib-opendistro-es-client-service
  namespace: logging
spec:
  clusterIP: 10.233.19.199
  ports:
  - name: http
    port: 9200
    protocol: TCP
    targetPort: 9200
  - name: transport
    port: 9300
    protocol: TCP
    targetPort: 9300
  - name: metrics
    port: 9600
    protocol: TCP
    targetPort: 9600
  - name: rca
    port: 9650
    protocol: TCP
    targetPort: 9650
  selector:
    role: client
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
What is wrong with my setup?
By default, the ingress controller proxies incoming requests to your backend using the HTTP protocol.
Your backend service expects HTTPS requests though, so you need to tell the nginx ingress controller to use HTTPS.
You can do so by adding an annotation to the Ingress resource like this:
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
Details about this annotation are in the documentation:
Using backend-protocol annotations is possible to indicate how NGINX should communicate with the backend service. (Replaces secure-backends in older versions) Valid Values: HTTP, HTTPS, GRPC, GRPCS, AJP and FCGI
By default NGINX uses HTTP.
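Applied to the Ingress from the question, that would look roughly like this; only the backend-protocol annotation is new, everything else is unchanged:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-elasticsearch
  namespace: logging
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
  - host: elasticsearch.host.com
    http:
      paths:
      - backend:
          serviceName: es-kib-opendistro-es-client-service
          servicePort: 9200
        path: /
  tls:
  - hosts:
    - elasticsearch.host.com
    secretName: cred-secret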

How to expose CockroachDB using Ingress on Google Cloud for external load testing

For a distributed databases project in my studies I need to deploy a CockroachDB cluster on Google Kubernetes Engine and run a YCSB load test against it.
The YCSB client is going to run on another VM so that the results are comparable to other groups' results.
Now I need to expose the DB Console on Port 8080 as well as the Database Endpoint on Port 26257.
So far I started by changing the cockroachdb-public service to kind: NodePort and exposing its ports using an Ingress. My current problem is exposing both ports (ideally on their default ports 8080 and 26257) and having them accessible from YCSB.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cockroachdb-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: cockroachdb-global-ip
    ingress.citrix.com/insecure-service-type: "tcp"
    ingress.citrix.com/insecure-port: "6379"
  labels:
    app: cockroachdb
spec:
  backend:
    serviceName: cockroachdb-public
    servicePort: 26257
  rules:
  - http:
      paths:
      - path: /labs/*
        backend:
          serviceName: cockroachdb-public
          servicePort: 8080
      - path: /*
        backend:
          serviceName: cockroachdb-public
          servicePort: 26257
So far I only managed to route it to different paths. I'm not sure this can work at all, because the JDBC driver used by YCSB speaks TCP, not HTTP.
How do I expose two ports of one service using an Ingress for TCP?
Focusing on:
How do I expose two ports of one service using an Ingress for TCP?
In general when an Ingress resource is referenced it's for HTTP/HTTPS traffic.
You cannot expose the TCP traffic with an Ingress like the one mentioned in your question.
Side note!
There are some options to use an Ingress controller to pass the TCP/UDP traffic (nginx-ingress).
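As a minimal sketch of that option: the nginx ingress controller reads TCP mappings from a ConfigMap passed via its --tcp-services-configmap flag (often named tcp-services in the ingress-nginx namespace, though names vary by installation), and the controller's own Service must also expose the mapped port:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # "<external port>": "<namespace>/<service name>:<service port>"
  "26257": "default/cockroachdb-public:26257"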
Alternatively, you could expose your application with a Service of type LoadBalancer:
apiVersion: v1
kind: Service
metadata:
  name: cockroach-db-cockroachdb-public
  namespace: default
spec:
  ports:
  - name: grpc
    port: 26257
    protocol: TCP
    targetPort: grpc # (containerPort: 26257)
  - name: http
    port: 8080
    protocol: TCP
    targetPort: http # (containerPort: 8080)
  selector:
    selector: INSERT-SELECTOR-FROM-YOUR-APP
  type: LoadBalancer
Disclaimer!
The above example is taken from the cockroachdb Helm chart with one modified value:
service.public.type="LoadBalancer"
With the above definition you will expose your Pods to external traffic on ports 8080 and 26257 through a TCP/UDP load balancer. You can read more about it at the link below:
Cloud.google.com: Kubernetes Engine: Docs: How to: Exposing apps: Creating a service of type LoadBalancer
The YCSB client is going to run on another VM so that the results are comparable to other groups' results.
If this VM is located in GCP infrastructure you could also take a look at the Internal TCP/UDP LoadBalancer:
Cloud.google.com: Kubernetes Engine: Using an internal TCP/UDP load balancer
Also I'm not sure about the annotations of your Ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cockroachdb-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: cockroachdb-global-ip
    ingress.citrix.com/insecure-service-type: "tcp"
    ingress.citrix.com/insecure-port: "6379"
In GKE, when you create an Ingress without specifying an ingress.class, you get the gce controller. The ingress.citrix.com annotations are specific to the Citrix controller and will not work with the gce controller.
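If you want a particular controller to pick up this Ingress, set the class explicitly, as already done in the earlier example; a sketch:
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx" # or "gce" for the default GKE controller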
Additional resources:
Kubernetes.io: Docs: Ingress
Kubernetes.io: Docs: Service

Call a REST API from one pod to another pod in the same Kubernetes cluster

In my k8s cluster I have two pods, podA and podB, both in the same cluster. The microservice on pod B is a Spring Boot REST API. The microservice on pod A has the IP and port of pod B in its application.yaml. Now every time podB is recreated its IP changes, which forces us to change the IP in podA's application.yml. Please suggest a better way.
My limitation is : I can't change the code of podA.
A Service will provide a consistent DNS name for accessing Pods.
An application should never address a Pod directly unless you have a specific reason to (custom load balancing is one I can think of, or StatefulSets where pods have an identity).
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
You will then have a consistent DNS name to access any Pods that match the selector:
my-service.default.svc.cluster.local
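So instead of pod B's IP, pod A's configuration can reference that DNS name. A hypothetical application.yaml fragment for pod A (the property names here are placeholders; only the host and port values matter):
# hypothetical keys, reuse whatever properties pod A already defines
podb:
  base-url: http://my-service.default.svc.cluster.local:80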
That's what Services are for. Take a Postgres service, for example:
kind: Service
apiVersion: v1
metadata:
  name: postgres-service
spec:
  type: ClusterIP
  selector:
    app: postgres
  ports:
  - protocol: TCP
    port: 5432
    targetPort: 5432
You can then use postgres-service in other pods instead of referring to the pod's IP address. You also get the advantage that Kubernetes does some load balancing for you as well.
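For example, a Spring Boot application in the same namespace could reach it through the service name alone; the database name and credentials below are placeholders:
spring:
  datasource:
    url: jdbc:postgresql://postgres-service:5432/mydb # placeholder database name
    username: myuser # placeholder
    password: mypassword # placeholder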

How to configure a GKE load balancer for a golang tcp server?

After deploying a golang server container and gke load balancer I can successfully connect to the external ip of the load balancer, but no data reaches the server container.
It works as expected when I run the server container locally and point the client at localhost. I changed it to serve HTTP requests and it worked fine with the same Kubernetes manifests. However, if I try to serve both TCP and HTTP (on different ports) then neither works on GKE, but again both work fine locally. So I suspect it has something to do with either how I configured the load balancer, or that the way I'm listening for TCP connections in the server breaks something when running on GKE but not locally.
K8s Service Manifest
apiVersion: v1
kind: Service
metadata:
  name: steel-server-service
spec:
  type: LoadBalancer
  selector:
    app: steel-server
  ports:
  - protocol: TCP
    name: tcp
    port: 12345
    targetPort: 12345
K8s Deployment Manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: steel-server-deployment
  labels:
    app: steel-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: steel-server
  template:
    metadata:
      labels:
        app: steel-server
    spec:
      containers:
      - name: steel-server
        image: gcr.io/<my-project-id>/steel-server:latest
        ports:
        - containerPort: 12345
          name: tcp
Relevant Go TCP Server Code
server, err := net.Listen("tcp", ":12345")
if err != nil {
    log.Fatalln("Couldn't start up tcp server: ", err)
}
Can you first try kubectl get svc so you know which ports the load balancer has open, since you got an external IP from GCP for the type: LoadBalancer service?
apiVersion: v1
kind: Service
metadata:
  name: steel-server-service
spec:
  type: LoadBalancer
  selector:
    app: steel-server
  ports:
  - protocol: TCP
    name: tcp
    port: 80
    targetPort: 12345
Try this Service config: I changed the port to 80 while keeping the targetPort the same as the container port.
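Once the external IP is provisioned, a quick way to check the TCP path end to end (netcat is just one option here):
kubectl get svc steel-server-service # wait for EXTERNAL-IP to appear
nc -v <EXTERNAL-IP> 80 # should open a raw TCP connection to the Go server on targetPort 12345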

Connect kibana to elasticsearch in kubernetes cluster

I have a running Elasticsearch cluster and I am trying to connect Kibana to this cluster (same node). Currently the page hangs when I try to open the service in my browser using <node-ip>:<node-port>. In my Kibana pod logs, the last few log messages in the pod are:
{"type":"log","#timestamp":"2017-10-13T17:23:46Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0.0.0.0:5601"}
{"type":"log","#timestamp":"2017-10-13T17:23:46Z","tags":["status","ui settings","error"],"pid":1,"state":"red","message":"Status changed from uninitialized to red - Elasticsearch plugin is red","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2017-10-13T17:23:49Z","tags":["status","plugin:ml#5.6.3","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - Request Timeout after 3000ms","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
My kibana.yml file that is mounted into the kibana pod has the following config:
server.name: kibana-logging
server.host: 0.0.0.0
elasticsearch.url: http://elasticsearch:9300
xpack.security.enabled: false
xpack.monitoring.ui.container.elasticsearch.enabled: true
and my elasticsearch.yml file has the following config settings (I have 3 es pods)
cluster.name: elasticsearch-logs
node.name: ${HOSTNAME}
network.host: 0.0.0.0
bootstrap.memory_lock: false
xpack.security.enabled: false
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping.unicast.hosts: ["172.17.0.3:9300", "172.17.0.4:9300", "172.17.0.4:9300"]
I feel like the issue is currently with the network.host field but I'm not sure. What fields am I missing/do I need to modify in order to connect a Kibana pod to Elasticsearch if they are in the same cluster/node? Thanks!
ES Service:
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  labels:
    component: elasticsearch
    role: master
spec:
  type: NodePort
  selector:
    component: elasticsearch
    role: master
  ports:
  - name: http
    port: 9200
    targetPort: 9200
    nodePort: 30303
    protocol: TCP
Kibana Svc
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: default
  labels:
    component: kibana
spec:
  type: NodePort
  selector:
    component: kibana
  ports:
  - port: 80
    targetPort: 5601
    protocol: TCP
EDIT:
After changing the port to 9200 in kibana.yml, here is what I see at the end of the logs when I try to access Kibana:
{"type":"log","#timestamp":"2017-10-13T21:36:30Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0.0.0.0:5601"}
{"type":"log","#timestamp":"2017-10-13T21:36:30Z","tags":["status","ui settings","error"],"pid":1,"state":"red","message":"Status changed from uninitialized to red - Elasticsearch plugin is red","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2017-10-13T21:36:33Z","tags":["status","plugin:ml#5.6.3","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - Request Timeout after 3000ms","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","#timestamp":"2017-10-13T21:37:02Z","tags":["error","elasticsearch","admin"],"pid":1,"message":"Request error, retrying\nPOST http://elasticsearch:9200/.reporting-*/esqueue/_search?version=true => getaddrinfo ENOTFOUND elasticsearch elasticsearch:9200"}
{"type":"log","#timestamp":"2017-10-13T21:37:32Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","#timestamp":"2017-10-13T21:37:33Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","#timestamp":"2017-10-13T21:37:37Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","#timestamp":"2017-10-13T21:37:38Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","#timestamp":"2017-10-13T21:37:42Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
The issue here is that you exposed Elasticsearch on port 9200 but are trying to connect to port 9300 in your kibana.yml file.
You either need to edit your kibana.yml file to use:
elasticsearch.url: http://elasticsearch:9200
Or change the port in the elasticsearch service to 9300.
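Putting that together, the corrected kibana.yml from the question would be (only the elasticsearch.url line changes):
server.name: kibana-logging
server.host: 0.0.0.0
elasticsearch.url: http://elasticsearch:9200
xpack.security.enabled: false
xpack.monitoring.ui.container.elasticsearch.enabled: true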
