ElasticSearch for logging in Kubernetes does not work

The ElasticSearch pod is up and running and Kibana is working, but it cannot connect to ElasticSearch (connection refused in the logs).
Dear all,
I have a 3-node Kubernetes cluster with 1 master and 2 workers. I am trying to deploy ElasticSearch (testing environment only) for logging.
I followed the details here.
I only set the replicas of ElasticSearch to 1 instead of 2 because I do not have enough memory.
I run Kibana as a NodePort service and can open it, but it shows the error "Cannot connect to ElasticSearch".
kubectl get svc --namespace=kube-system
NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
elasticsearch-logging   ClusterIP   10.106.231.74   <none>        9200/TCP         5h44m
kibana-logging          NodePort    10.111.199.12   <none>        5601:30461/TCP   5h44m
I tried to ping the ElasticSearch pod from the Kibana pod, and it is pingable.
I tried to curl http://10.106.231.74:9200 (the service IP) from all hosts and from the Kibana pod, and every time I get Failed connect to 10.106.231.74:9200; Connection refused.
I tried to curl 127.0.0.1:9200 from inside the ElasticSearch pod and also got connection refused.
All YAML files are exactly as provided in the above link.
I expected at least some response from inside the ElasticSearch pod, but all the attempts returned connection refused.
The YAMLs are all in the link and all RBAC rules are OK.
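Since even curl 127.0.0.1:9200 from inside the pod is refused, ElasticSearch itself is most likely not listening yet (still initializing, or crash-looping). A minimal debugging sketch, assuming the addon's usual label k8s-app=elasticsearch-logging in kube-system:

kubectl -n kube-system get pods -l k8s-app=elasticsearch-logging
kubectl -n kube-system logs -l k8s-app=elasticsearch-logging --tail=50
kubectl -n kube-system describe pod -l k8s-app=elasticsearch-logging

On memory-constrained nodes, a common culprit is the kernel setting vm.max_map_count: ElasticSearch refuses to start and logs "max virtual memory areas vm.max_map_count [65530] is too low" until it is raised on the worker node, e.g. with sysctl -w vm.max_map_count=262144.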

Related

Gitea: dial tcp: lookup gitea-postgresql.default.svc.cluster.local

I see this error when trying to use Gitea with microk8s on Ubuntu 21.10:
$ k logs gitea-0 -c configure-gitea
Wait for database to become avialable...
gitea-postgresql (10.152.183.227:5432) open
...
2021/11/20 05:49:40 ...om/urfave/cli/app.go:277:Run() [I] PING DATABASE postgres
2021/11/20 05:49:45 cmd/migrate.go:38:runMigrate() [F] Failed to initialize ORM engine: dial tcp: lookup gitea-postgresql.default.svc.cluster.local: Try again
I am looking for some clues as to how to debug this please.
The other pods seem to be running as expected:
$ k get pod -A
NAMESPACE     NAME                                      READY   STATUS     RESTARTS   AGE
kube-system   hostpath-provisioner-5c65fbdb4f-nfx7d     1/1     Running    0          11h
kube-system   calico-node-h8tpk                         1/1     Running    0          11h
kube-system   calico-kube-controllers-f7868dd95-dpp8n   1/1     Running    0          11h
kube-system   coredns-7f9c69c78c-cnpkj                  1/1     Running    0          11h
default       gitea-memcached-584956987c-zb8kp          1/1     Running    0          20s
default       gitea-postgresql-0                        1/1     Running    0          20s
default       gitea-0                                   0/1     Init:1/2   1          20s
The services are not as expected, since gitea-0 is not starting:
$ k get svc -A
NAMESPACE     NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes                  ClusterIP   10.152.183.1     <none>        443/TCP                  11h
kube-system   kube-dns                    ClusterIP   10.152.183.10    <none>        53/UDP,53/TCP,9153/TCP   11h
default       gitea-postgresql-headless   ClusterIP   None             <none>        5432/TCP                 3m25s
default       gitea-ssh                   ClusterIP   None             <none>        22/TCP                   3m25s
default       gitea-http                  ClusterIP   None             <none>        3000/TCP                 3m25s
default       gitea-memcached             ClusterIP   10.152.183.15    <none>        11211/TCP                3m25s
default       gitea-postgresql            ClusterIP   10.152.183.227   <none>        5432/TCP                 3m25s
Also see:
https://github.com/ubuntu/microk8s/issues/2741
https://gitea.com/gitea/helm-chart/issues/249
I worked through to the point where I had the logs below, specifically:
cmd/migrate.go:38:runMigrate() [F] Failed to initialize ORM engine: dial tcp: lookup gitea-postgresql.default.svc.cluster.local: Try again
Using k cluster-info dump I saw:
[ERROR] plugin/errors: 2 gitea-postgresql.default.svc.cluster.local.cisco.com. A: read udp 10.1.147.194:56647->8.8.8.8:53: i/o timeout
That led me to test DNS against 8.8.8.8 with dig. That test didn't reveal any errors; DNS seemed to work. Even so, DNS remained suspect.
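An in-cluster check can also confirm whether resolution fails specifically from inside pods; a minimal sketch (busybox:1.28 is chosen deliberately, since nslookup is broken in newer busybox images):

$ k run dnstest --rm -it --restart=Never --image=busybox:1.28 -- nslookup gitea-postgresql.default.svc.cluster.local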
So then I tried microk8s enable storage dns:<IP address of DNS in lab>, whereas previously I had only run microk8s enable storage dns. The storage part enables the persistent volumes that the database needs.
The key piece here is passing the lab DNS server's IP address as an argument when enabling DNS with microk8s.
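For reference, a sketch of the working invocation, with 10.10.0.53 standing in as a placeholder for the lab DNS server's address:

$ microk8s enable storage dns:10.10.0.53

Afterwards the forwarder can be verified in the CoreDNS configuration (coredns is the usual ConfigMap name in microk8s):

$ k -n kube-system get configmap coredns -o yaml | grep forward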

Cannot call OpenWhisk web action from browser

How can I call a web action from my browser, when my OpenWhisk deployment is on a company sandbox server?
I have an OpenWhisk deployment set up on a company sandbox server using this guide: Deploying OpenWhisk on kind. The server is running CentOS 7. I created a web action and I am able to call it with curl -k https://apiHostName:apiHostPort/api/v1/web/guest/demo/hello?name=myName, where apiHostName and apiHostPort are the values defined in mycluster.yaml. However, trying to access the above URL from my browser returns ERR_CONNECTION_TIMED_OUT.
[root@vx3a27 wskcluster]# kubectl -n openwhisk get svc
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
owdev-apigateway   ClusterIP   10.96.9.60      A.B.C.D       8080/TCP,9000/TCP            6d1h
owdev-controller   ClusterIP   10.96.104.180   <none>        8080/TCP                     6d1h
owdev-couchdb      ClusterIP   10.96.40.104    <none>        5984/TCP                     6d1h
owdev-kafka        ClusterIP   None            <none>        9092/TCP                     6d1h
owdev-nginx        NodePort    10.96.84.3      A.B.C.D       80:31486/TCP,443:31001/TCP   6d1h
owdev-redis        ClusterIP   10.96.84.177    <none>        6379/TCP                     6d1h
owdev-zookeeper    ClusterIP   None            <none>        2181/TCP,2888/TCP,3888/TCP   6d1h
I have tried setting the external IP of both my nginx and apigateway services, as seen above, where A.B.C.D is the IP of my sandbox obtained with ifconfig. Running curl -k against A.B.C.D and opening it in the browser both yield ERR_CONNECTION_REFUSED.
What else can I try to get this to work?
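One check worth doing, given the service list above: owdev-nginx is a NodePort service mapping 443 to 31001, so the web action should be reachable on the node's address at that port. A sketch of the test, run once on the sandbox itself and once from your workstation, to separate an OpenWhisk problem from a company firewall blocking the port:

curl -k https://A.B.C.D:31001/api/v1/web/guest/demo/hello?name=myName

If this succeeds on the sandbox but times out from the workstation, the high NodePort is being filtered by the corporate network rather than misconfigured in Kubernetes.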

Unable to access the EC2 Kubernetes dashboard

I followed How can I access to kubernetes dashboard using NodePort in a remote cluster for testing?
My Kubernetes cluster runs on Amazon EC2 instances and the cluster services look like this:
$ kubectl get services --all-namespaces
NAMESPACE     NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default       kubernetes             ClusterIP   100.64.0.1      <none>        443/TCP         5h54m
kube-system   kube-dns               ClusterIP   100.64.0.10     <none>        53/UDP,53/TCP   5h53m
kube-system   kubernetes-dashboard   NodePort    100.68.178.51   <none>        443:31872/TCP   5h47m
$ kubectl cluster-info
Kubernetes master is running at https://api.selumalai.k8s.com
KubeDNS is running at https://api.selumalai.k8s.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
I have exposed the NodePort at 31872. If I access the dashboard in the browser using
$ kubectl proxy -p 8001 &
$ curl https://api.selumalai.k8s.com:31872
it loads forever. What am I doing wrong?
You need to add a security group rule for your instance to allow traffic on port 31872.
How to do so is explained here.
If you use kubectl proxy, the dashboard can only be accessed locally, which is stated in the docs:
...
kubectl proxy
...
The UI can only be accessed from the machine where the command is executed. See kubectl proxy --help for more options.
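For example, with kubectl proxy running on port 8001, the dashboard from the service list above would be reachable (only on that same machine) at a URL of this form, which is the standard proxy path for a service named kubernetes-dashboard serving HTTPS in kube-system:

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/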
Also, here is a guide from AWS on how to Deploy the Kubernetes Dashboard (web UI).
You can try running an L7 ELB to expose the Dashboard, which is explained here.
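For the security group route, a minimal sketch with the AWS CLI, where sg-0123456789abcdef0 and 203.0.113.0/24 are placeholders for the nodes' security group and your client network:

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 31872 \
  --cidr 203.0.113.0/24

The dashboard is then reached at https://<node-public-IP>:31872, i.e. a node's address rather than the API server name used in the question.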

external IP not being generated for elastic

I followed the commands mentioned on this page...
https://www.elastic.co/guide/en/cloud-on-k8s/current/index.html
The elastic service is started successfully, but I do not see an EXTERNAL-IP:
# /usr/local/bin/kubectl --kubeconfig="wzone2.yaml" get service
NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes                ClusterIP   10.245.0.1      <none>        443/TCP    10m
quickstart-es             ClusterIP   10.245.97.209   <none>        9200/TCP   3m11s
quickstart-es-discovery   ClusterIP   None            <none>        9300/TCP   3m11s
I tried the port-forwarding command, but that did not help:
kubectl port-forward service/quickstart-es 9200
How do I connect to this elastic server?
ClusterIP services are only available from inside the cluster. To make the service visible from the outside, you would need to change it to the LoadBalancer type and have an implementation of that available (read: be running on a cloud provider or use MetalLB).
Outside of using a LoadBalancer as @coderanger suggested, you can also use a service of type NodePort. This will allow you to connect to your service using the node IP address, without depending on cloud providers.
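A minimal sketch of that change, using the service name from the output above:

kubectl patch service quickstart-es -p '{"spec":{"type":"NodePort"}}'
kubectl get service quickstart-es    # note the assigned high port, e.g. 9200:3XXXX/TCP

The service then becomes reachable at <any-node-IP>:<assigned port>; depending on the ECK version, TLS and the elastic user's password may be enabled, so curl -k plus credentials may be required.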

Connecting jaeger with elasticsearch backend storage on kubernetes cluster

I have a Kubernetes cluster on Google Cloud Platform, and on it a Jaeger deployment created via the development setup of the jaeger-kubernetes templates.
Because my purpose is to set up elasticsearch as the backend storage, I followed the jaeger-kubernetes GitHub documentation with the following actions.
I created the services via the production setup options.
Here the URLs, the username and password, and the ports used to access the elasticsearch server are configured:
kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-kubernetes/master/production-elasticsearch/configmap.yml
And here the download of the docker images for the elasticsearch service and their volume mounts are configured:
kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-kubernetes/master/production-elasticsearch/elasticsearch.yml
And then, at this moment, we have an elasticsearch service running on ports 9200 and 9300:
kubectl get service elasticsearch [a89fbe2]
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
elasticsearch   ClusterIP   None         <none>        9200/TCP,9300/TCP   1h
I followed up with the creation of the jaeger components, using the jaeger-kubernetes production template, in this way:
λ bgarcial [~] → kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-kubernetes/master/jaeger-production-template.yml
deployment.extensions/jaeger-collector created
service/jaeger-collector created
service/zipkin created
deployment.extensions/jaeger-query created
service/jaeger-query created
daemonset.extensions/jaeger-agent created
According to the Jaeger architecture, the jaeger-collector and jaeger-query services require access to backend storage.
And so, these are my services running on my kubernetes cluster:
λ bgarcial [~/workspace/jaeger-elastic] at  master ?
→ kubectl get services [baefdf9]
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                        AGE
elasticsearch      ClusterIP      None            <none>           9200/TCP,9300/TCP              3h
jaeger-collector   ClusterIP      10.55.253.240   <none>           14267/TCP,14268/TCP,9411/TCP   3h
jaeger-query       LoadBalancer   10.55.248.243   35.228.179.167   80:30398/TCP                   3h
kubernetes         ClusterIP      10.55.240.1     <none>           443/TCP                        3h
zipkin             ClusterIP      10.55.240.60    <none>           9411/TCP                       3h
Then I went to the jaeger-configuration ConfigMap (created from the configmap.yml file above) with the kubectl edit configmap jaeger-configuration command, in order to try to edit the elasticsearch URL endpoints (maybe? At this moment I am supposing that this is the next step ...).
I execute it:
λ bgarcial [~] → kubectl edit configmap jaeger-configuration
And I get the following edit entry:
apiVersion: v1
data:
  agent: |
    collector:
      host-port: "jaeger-collector:14267"
  collector: |
    es:
      server-urls: http://elasticsearch:9200
      username: elastic
      password: changeme
    collector:
      zipkin:
        http-port: 9411
  query: |
    es:
      server-urls: http://elasticsearch:9200
      username: elastic
      password: changeme
  span-storage-type: elasticsearch
kind: ConfigMap
metadata:
  creationTimestamp: "2018-12-27T13:24:11Z"
  labels:
    app: jaeger
    jaeger-infra: configuration
  name: jaeger-configuration
  namespace: default
  resourceVersion: "1387"
  selfLink: /api/v1/namespaces/default/configmaps/jaeger-configuration
  uid: b28eb5f4-09da-11e9-9f1e-42010aa60002
Here ... do I need to set up my own URLs for the collector and query services, which will connect with the elasticsearch backend service?
How can I set up the elasticsearch IP address or URLs here?
In the jaeger components, the query and collector need access to storage, but I don't know what the elastic endpoint is ...
Is server-urls: http://elasticsearch:9200 a correct endpoint?
I am starting out in the Kubernetes and DevOps world, and I would appreciate it if someone could help me with the concepts and point me in the right direction to set up jaeger with elasticsearch as backend storage.
When you are accessing the service from a pod in the same namespace, you can use just the service name.
Example:
http://elasticsearch:9200
If you are accessing the service from a pod in a different namespace, you should also specify the namespace.
Example:
http://elasticsearch.mynamespace:9200
http://elasticsearch.mynamespace.svc.cluster.local:9200
To check in what namespace the service is located, use the following command:
kubectl get svc --all-namespaces -o wide
Note: changing a ConfigMap does not apply it to the deployment instantly. Usually you need to restart all pods in the deployment for the new ConfigMap values to take effect. There is no rolling-restart functionality at the moment, but you can use the following command as a workaround
(replace the deployment name and container name with the real ones):
kubectl patch deployment mydeployment -p '{"spec":{"template":{"spec":{"containers":[{"name":"my-pod-name","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}'
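On newer clusters (kubectl 1.15 and later) there is now a built-in rolling restart, which would cover the jaeger deployments shown earlier:

kubectl rollout restart deployment jaeger-collector
kubectl rollout restart deployment jaeger-query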
