How to connect Kibana with Elastic Enterprise Search in Kubernetes

I have deployed Elasticsearch, Kibana and Enterprise Search to my local Kubernetes cluster via this official guide, and they are working fine individually (and are connected to the Elasticsearch instance).
Now I wanted to set up Kibana to connect with Enterprise Search like this:
I tried it with localhost, but that obviously did not work in Kubernetes.
So I tried the service name inside Kubernetes, but now I am getting this error:
The log from Kubernetes is the following:
{"type":"log","#timestamp":"2021-01-15T15:18:48Z","tags":["error","plugins","enterpriseSearch"],"pid":8,"message":"Could not perform access check to Enterprise Search: FetchError: request to https://enterprise-search-quickstart-ent-http.svc:3002/api/ent/v2/internal/client_config failed, reason: getaddrinfo ENOTFOUND enterprise-search-quickstart-ent-http.svc enterprise-search-quickstart-ent-http.svc:3002"}
So the question is: how do I configure my Kibana enterpriseSearch.host so that it will work?
Here are my deployment yaml files:
# Kibana
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.10.1
  count: 1
  elasticsearchRef:
    name: quickstart
  config:
    enterpriseSearch.host: 'https://enterprise-search-quickstart-ent-http.svc:3002'
# Enterprise Search
apiVersion: enterprisesearch.k8s.elastic.co/v1beta1
kind: EnterpriseSearch
metadata:
  name: enterprise-search-quickstart
spec:
  version: 7.10.1
  count: 1
  elasticsearchRef:
    name: quickstart
  config:
    ent_search.external_url: https://localhost:3002

I encountered much the same issue, but in a development environment based on docker-compose.
I fixed it by setting the ent_search.external_url value to the same value as enterpriseSearch.host.
In your case, I guess, your Enterprise Search deployment YAML file should look like this:
# Enterprise Search
apiVersion: enterprisesearch.k8s.elastic.co/v1beta1
kind: EnterpriseSearch
metadata:
  name: enterprise-search-quickstart
spec:
  version: 7.10.1
  count: 1
  elasticsearchRef:
    name: quickstart
  config:
    ent_search.external_url: 'https://enterprise-search-quickstart-ent-http.svc:3002'
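As a side note, the getaddrinfo ENOTFOUND in the question suggests the host name may also be missing its namespace: in-cluster Service DNS names follow the <service>.<namespace>.svc pattern. A minimal sketch, assuming both resources were deployed to the default namespace, with both settings pointing at the same namespace-qualified URL:
# Kibana side (assumes the default namespace)
config:
  enterpriseSearch.host: 'https://enterprise-search-quickstart-ent-http.default.svc:3002'
# Enterprise Search side, same URL
config:
  ent_search.external_url: 'https://enterprise-search-quickstart-ent-http.default.svc:3002'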

Related

How to configure Filebeat on ECK for kafka input?

I have Elasticsearch and Kibana running on Kubernetes, both created by ECK. Now I am trying to add Filebeat and configure it to index data coming from a Kafka topic. This is my current configuration:
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: my-filebeat
  namespace: my-namespace
spec:
  type: filebeat
  version: 7.10.2
  elasticsearchRef:
    name: my-elastic
  kibanaRef:
    name: my-kibana
  config:
    filebeat.inputs:
    - type: kafka
      hosts:
      - host1:9092
      - host2:9092
      - host3:9092
      topics: ["my.topic"]
      group_id: "my_group_id"
      index: "my_index"
  deployment:
    podTemplate:
      spec:
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true
        securityContext:
          runAsUser: 0
        containers:
        - name: filebeat
In the logs of the pod I can see entries like the following:
log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":2470,"time":{"ms":192}},"total":{"ticks":7760,"time":{"ms":367},"value":7760},"user":{"ticks":5290,"time":{"ms":175}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":13},"info":{"ephemeral_id":"5ce8521c-f237-4994-a02e-dd11dfd31b09","uptime":{"ms":181997}},"memstats":{"gc_next":23678528,"memory_alloc":15320760,"memory_total":459895768},"runtime":{"goroutines":106}},"filebeat":{"harvester":{"open_files":0,"running":0},"inputs":{"kafka":{"bytes_read":46510,"bytes_write":37226}}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":1.18,"15":0.77,"5":0.97,"norm":{"1":0.0738,"15":0.0481,"5":0.0606}}}}}}
And no error entries are there, so I assume that the connection to Kafka works. Unfortunately, there is no data in the my_index specified above. What am I doing wrong?
I guess you are not able to connect to the Elasticsearch referenced in the output.
As per the docs, ECK secures the deployed Elasticsearch and stores its credentials in Kubernetes Secrets.
https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-beat-configuration.html
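As a quick check, ECK stores the password for the built-in elastic user in a Secret named <elasticsearchRef-name>-es-elastic-user, so you can query the target index directly and see whether any events arrive. A sketch only, with names mirroring the question:
# Password for the built-in elastic user, stored by ECK in a Secret
PASSWORD=$(kubectl get secret my-elastic-es-elastic-user -n my-namespace \
  -o go-template='{{.data.elastic | base64decode}}')
# Port-forward the ECK-managed HTTP service and count documents in the index from the Beat config
kubectl port-forward -n my-namespace service/my-elastic-es-http 9200 &
curl -sk -u "elastic:$PASSWORD" "https://localhost:9200/my_index/_count"
If the count stays at zero while Filebeat is reading from Kafka, the problem is likely on the Elasticsearch output side rather than the Kafka input.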

Kubernetes Kibana operator failures and Nginx ingress timeouts

I just started implementing a Kubernetes cluster on an Azure Linux VM. I'm very new with all this. The cluster is running on a small VM (2 core, 16gb). I set up the ECK stack using their tutorial online, and an Nginx Ingress controller to expose it.
Most of the day, everything runs fine. I can access the Kibana dashboard, run Elastic queries, Nginx is working. But about once each day, something happens that causes the Kibana Endpoint matching the Kibana Service to not have any IP address. As a result, the Service can't route correctly to the container. When this happens, the Kibana pod has a status of Running, but says that 0/1 are running. It never triggers any restarts, and as a result, the Kibana dashboard becomes inaccessible. I've tried reproducing this by shutting down the Docker container, force killing the pod, but can't reliably reproduce it.
Looking at the logs on the Kibana pod, there are a bunch of errors due to timeouts. The Nginx logs say that it can't find the Endpoint for the Service. It looks like this could potentially be the source. Has anyone encountered this? Does anyone know a reliable way to prevent this?
This should probably be a separate question, but the other issue this causes is completely blocking all Nginx Ingress. Any new requests are not seen in the logs, and the logs completely stop after the message about not finding an endpoint. As a result, all URLs that Ingress is normally responsible for time out, and the whole cluster becomes externally unusable. This is fixed by deleting the Nginx controller pod, but the pod doesn't restart itself. Can someone explain why an issue like this would completely block Nginx? And why the Nginx pod can't detect this and restart?
Edit:
The Nginx logs end with this:
W1126 16:20:31.517113 6 controller.go:950] Service "default/gwam-kb-http" does not have any active Endpoint.
W1126 16:20:34.848942 6 controller.go:950] Service "default/gwam-kb-http" does not have any active Endpoint.
W1126 16:21:52.555873 6 controller.go:950] Service "default/gwam-kb-http" does not have any active Endpoint.
Any further requests time out and do not appear in the logs.
I don't have logs for the Kibana pod, but they were just consistent timeouts to the Kibana service default/gwam-kb-http (the same as in the Nginx logs above). This caused the readiness probe to fail and show 0/1 Running, but did not trigger a restart of the pod.
Kibana Endpoints when everything is normal
Name:         gwam-kb-http
Namespace:    default
Labels:       common.k8s.elastic.co/type=kibana
              kibana.k8s.elastic.co/name=gwam
Annotations:  endpoints.kubernetes.io/last-change-trigger-time: 2020-11-26T16:27:20Z
Subsets:
  Addresses:          10.244.0.6
  NotReadyAddresses:  <none>
  Ports:
    Name   Port  Protocol
    ----   ----  --------
    https  5601  TCP
Events:  <none>
When I run into this issue, Addresses is empty and the pod IP is under NotReadyAddresses.
I'm using the very basic YAML from the ECK setup tutorial:
Elastic (no problems here)
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: gwam
spec:
  version: 7.10.0
  nodeSets:
  - name: default
    count: 3
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 2Gi
        storageClassName: elasticsearch
Kibana:
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: gwam
spec:
  version: 7.10.0
  count: 1
  elasticsearchRef:
    name: gwam
Ingress for the Kibana service:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: nginx-ingress-secure-backend-no-rewrite
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.org/proxy-connect-timeout: "30s"
    nginx.org/proxy-read-timeout: "20s"
    nginx.org/proxy-send-timeout: "60s"
    nginx.org/client-max-body-size: "4m"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
  - hosts:
    - <internal company site>
    secretName: gwam-tls-secret
  rules:
  - host: <internal company site>
    http:
      paths:
      - path: /
        backend:
          serviceName: gwam-kb-http
          servicePort: 5601
Some more environment details:
Kubernetes version: 1.19.3
OS: Ubuntu 18.04.5 LTS (GNU/Linux 5.4.0-1031-azure x86_64)
Edit 2:
It seems like I'm getting some kind of network error here. None of my pods can do a DNS lookup for kubernetes.default. All the networking pods are running, but after adding logs to CoreDNS, I'm seeing the following:
[ERROR] plugin/errors: 2 1699910358767628111.9001703618875455268. HINFO: read udp 10.244.0.69:35222->10.234.44.20:53: i/o timeout
I'm using Flannel for my network. I'm thinking of trying to reset and switch to Calico and of increasing nf_conntrack_max, as some answers suggest.
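For reference, the usual way to run that lookup check is from a throwaway pod (a sketch along the lines of the Kubernetes DNS debugging docs; the pod name is arbitrary):
kubectl run -it --rm dnstest --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default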
This ended up being a very simple mistake on my part. I thought it was a pod or DNS issue, but it was just a general network issue: IP forwarding was turned off. I turned it on with:
sysctl -w net.ipv4.ip_forward=1
and added net.ipv4.ip_forward=1 to /etc/sysctl.conf to make it persistent.

Packetbeat does not add Kubernetes metadata

I've started a minikube (using Kubernetes 1.18.3) to test out ECK and specifically Packetbeat. The minikube profile is called "packetbeat" (important, as that's the hostname for the VirtualBox VM as well) and I followed the ECK quickstart to get it up and running. Elasticsearch (single node) and Kibana are running fine, and Packetbeat is gathering flows as well; however, I'm unable to make it add the Kubernetes metadata to the fields.
I'm working in the default namespace and created a ClusterRoleBinding to the view ClusterRole for the default ServiceAccount in that namespace. This is working; if I do not do that, Packetbeat reports that it is unable to list the Pods on the API server.
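For reference, a sketch of that binding (the binding name is arbitrary and the namespace is assumed to be default):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: packetbeat-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: default
  namespace: default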
This is the Beat config I'm using to make ECK deploy packetbeat:
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: packetbeat
spec:
  type: packetbeat
  version: 7.9.0
  elasticsearchRef:
    name: quickstart
  kibanaRef:
    name: kibana
  config:
    packetbeat.interfaces.device: any
    packetbeat.protocols:
    - type: http
      ports: [80, 8000, 8080, 9200]
    - type: tls
      ports: [443]
    packetbeat.flows:
      timeout: 30s
      period: 10s
    processors:
    - add_kubernetes_metadata: {}
  daemonSet:
    podTemplate:
      spec:
        terminationGracePeriodSeconds: 30
        hostNetwork: true
        automountServiceAccountToken: true # some older Beat versions are depending on this settings presence in k8s context
        dnsPolicy: ClusterFirstWithHostNet
        containers:
        - name: packetbeat
          securityContext:
            runAsUser: 0
            capabilities:
              add:
              - NET_ADMIN
(This is mostly a slightly modified example from the ECK examples page.) However, this is not working at all. I tried it with add_kubernetes_metadata: {} first, but that errors with the message:
2020-08-19T14:23:38.550Z ERROR [kubernetes] kubernetes/util.go:117 kubernetes: Querying for pod failed with error: pods "packetbeat" not found {"libbeat.processor": "add_kubernetes_metadata"}
This message goes away when I add host: packetbeat. I'm no longer getting an error now, but I'm not getting the Kubernetes metadata either. I'm mostly interested in the namespace tag, but I'm not getting any. I do not see any additional errors in the log; it just reports monitoring details every 30 seconds at the moment.
What am I doing wrong? Any more information I can provide to help me debug this?
So the docs are just unclear. Although they do not explicitly state it, you do need to add indexers and matchers. My understanding was that there are "default" ones (since you can disable those), but that does not seem to be the case. Adding the indexers and matchers as per the example in the docs makes the Kubernetes metadata part of the data.
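For reference, a sketch of what that could look like for this Packetbeat config, modelled on the add_kubernetes_metadata example in the Beats docs (the lookup fields are an assumption for flow/transaction events; adjust them to the fields you actually index):
processors:
- add_kubernetes_metadata:
    host: packetbeat
    indexers:
    - ip_port:
    matchers:
    - fields:
        lookup_fields: ["destination.ip", "server.ip"]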

Ambassador Edge Stack : Working with sample project but not with my project

I am trying to configure Ambassador as an API gateway in my Kubernetes cluster locally.
Installation:
Installed it from https://www.getambassador.io/docs/latest/tutorials/getting-started/ (both the Windows and the Kubernetes parts)
Can log in with edgectl login --namespace=ambassador localhost and see the dashboard
Configured it with the sample project they provide at https://www.getambassador.io/docs/latest/tutorials/quickstart-demo/
Here is the YAML file for the deployment of the demo app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: quote
  namespace: ambassador
spec:
  replicas: 1
  selector:
    matchLabels:
      app: quote
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: quote
    spec:
      containers:
      - name: backend
        image: docker.io/datawire/quote:0.4.1
        ports:
        - name: http
          containerPort: 8080
Everything is working as expected. Now I am trying to configure it with my own project, but it is not working.
For a simpler case, keeping every other configuration the same as in the Ambassador demo, I just changed image: docker.io/datawire/quote:0.4.1 to image: angularapp:latest, where this is a Docker image of an Angular 10 project.
But I am getting upstream connect error or disconnect/reset before headers. reset reason: connection failure
I spent one day on this problem. I reset my Kubernetes from the Docker Desktop app and reconfigured it, but no luck.
That error occurs when a mapping is valid, but the service it is pointing to cannot be reached for some reason. Is the deployment actually running (kubectl get deploy -A -o wide)? Is your Angular app exposing port 8080? 8080 is a pretty common Kubernetes port, but not so much in the frontend development world. If you use kubectl exec -it {{AMBASSADOR_POD}} -- sh, does curl http://quote return the expected output?
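For illustration, a minimal sketch of the container section under the assumption that the Angular image is nginx-based and listens on port 80 rather than 8080 (adjust to whatever port your image really exposes, and keep the Service/Mapping pointed at that same port):
    spec:
      containers:
      - name: backend
        image: angularapp:latest
        ports:
        - name: http
          containerPort: 80  # must match the port the app actually listens on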

How can I visualize Elasticsearch metrics in Prometheus? Both are installed in a GKE cluster

I have a GKE cluster with this Elasticsearch logging solution installed:
https://console.cloud.google.com/marketplace/details/google/elastic-gke-logging
And prometheus-operator installed via Helm inside the same cluster.
I would like to configure a Grafana dashboard to visualize the metrics of my Elasticsearch.
I read that the Elastic application from GKE has the elastic_exporter installed: https://github.com/GoogleCloudPlatform/click-to-deploy/blob/master/k8s/elastic-gke-logging/README.md
But if I go to my Prometheus panel I don't see any metrics about Elasticsearch. I tried installing another elastic_exporter, but nothing.
Am I missing something? Did I forget something? Do you need to configure Prometheus to read from the elastic_exporter?
I see the metrics when I port-forward the elastic_exporter, but I don't see the metrics inside the Prometheus panel.
# HELP elasticsearch_breakers_estimated_size_bytes Estimated size in bytes of breaker
# TYPE elasticsearch_breakers_estimated_size_bytes gauge
elasticsearch_breakers_estimated_size_bytes{breaker="accounting",cluster="elastic-gke-logging-1-cluster",es_client_node="true",es_data_node="true",es_ingest_node="true",es_master_node="true",host="10.50.2.54",name="elastic-gke-logging-1-elasticsearch-0"} 4.6637464e+07
elasticsearch_breakers_estimated_size_bytes{breaker="fielddata",cluster="elastic-gke-logging-1-cluster",es_client_node="true",es_data_node="true",es_ingest_node="true",es_master_node="true",host="10.50.2.54",name="elastic-gke-logging-1-elasticsearch-0"} 0
elasticsearch_breakers_estimated_size_bytes{breaker="in_flight_requests",cluster="elastic-gke-logging-1-cluster",es_client_node="true",es_data_node="true",es_ingest_node="true",es_master_node="true",host="10.50.2.54",name="elastic-gke-logging-1-elasticsearch-0"} 0
elasticsearch_breakers_estimated_size_bytes{breaker="parent",cluster="elastic-gke-logging-1-cluster",es_client_node="true",es_data_node="true",es_ingest_node="true",es_master_node="true",host="10.50.2.54",name="elastic-gke-logging-1-elasticsearch-0"} 4.6637464e+07
elasticsearch_breakers_estimated_size_bytes{breaker="request",cluster="elastic-gke-logging-1-cluster",es_client_node="true",es_data_node="true",es_ingest_node="true",es_master_node="true",host="10.50.2.54",name="elastic-gke-logging-1-elasticsearch-0"} 0
# HELP elasticsearch_breakers_limit_size_bytes Limit size in bytes for breaker
# TYPE elasticsearch_breakers_limit_size_bytes gauge
Thank you
You are probably missing a ServiceMonitor; this should work:
k apply -f -<<EOF
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  annotations:
  labels:
    release: prom
  name: elasticsearch
spec:
  endpoints:
  - port: metrics
  selector:
    matchLabels:
      app: es-exporter
EOF
Your Elasticsearch Service must define a metrics port and have the label app: es-exporter, similar to this:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: es-exporter
    component: elasticsearch
  name: elasticsearch
spec:
  ports:
  - name: transport
    port: 9200
    protocol: TCP
    targetPort: 9200
  - name: metrics
    port: 9108
    protocol: TCP
    targetPort: 9108
  selector:
    component: elasticsearch
  type: ClusterIP
After that you should find the metrics in Prometheus. To confirm, you can always use the Status -> Targets tab in Prometheus.
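One more thing worth checking with the prometheus-operator Helm chart: the Prometheus resource only picks up ServiceMonitors whose labels match its serviceMonitorSelector, which by default is tied to the Helm release. A quick way to see what it expects (the release name prom above is only an example):
kubectl get prometheus -o jsonpath='{.items[0].spec.serviceMonitorSelector}'
# e.g. {"matchLabels":{"release":"prom"}} -> the ServiceMonitor must carry a matching release label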
