Kubernetes Go client: used storage of nodes and cluster - go

I am a newbie in Go. I want to get the storage statistics of the nodes and the cluster in Kubernetes using Go code. How can I get the free and used storage of the Kubernetes nodes and cluster using Go?

This is actually 2 problems:
How do I perform HTTP requests to the Kubernetes master?
See [1] for more details. Tl;dr you can access the apiserver in at least 3 ways:
a. kubectl get nodes (not Go)
b. kubectl proxy, followed by a Go HTTP client talking to the proxy's URL
c. Running a pod in a kubernetes cluster
What are the requests I need to do to get node stats?
a. Run kubectl describe node; it should show you resource information.
b. Now run kubectl describe node --v=7; it should show you the REST calls it makes.
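To make that concrete, here is a minimal sketch using client-go (a newer client library than existed when this answer was written); it assumes a kubeconfig at the default location and prints each node's reported ephemeral-storage capacity and allocatable values from the node status:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Build a client from the local kubeconfig (same credentials kubectl uses).
    config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }

    // List all nodes and print the storage figures the kubelet reports in node status.
    nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    for _, node := range nodes.Items {
        capacity := node.Status.Capacity.StorageEphemeral()
        allocatable := node.Status.Allocatable.StorageEphemeral()
        fmt.Printf("node %s: ephemeral-storage capacity=%s allocatable=%s\n",
            node.Name, capacity.String(), allocatable.String())
    }
}

Summing the per-node quantities gives cluster-wide figures. For actually used/free disk space, the kubelet Summary API (proxied through the apiserver at /api/v1/nodes/<node-name>/proxy/stats/summary) returns per-node filesystem usage, which you can query with the same client's REST interface.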
I also think you should reformat the title of your question per https://stackoverflow.com/help/how-to-ask, so it reflects what you're really asking.
[1] https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/accessing-the-cluster.md

Related

If there are 3 etcd nodes, will Apache APISIX still be able to get the configuration if 2 of them fail? Why?

In order to use APISIX I have prepared an etcd cluster with three nodes. I would like to ask: if two nodes fail, can APISIX still get the configuration normally?
Also, if all the nodes fail, will APISIX still work?

Can I use a single Elasticsearch/Kibana for multiple k8s clusters?

Do you know of any gotchas or requirements that would not allow using a single ES/Kibana as a target for Fluentd in multiple k8s clusters?
Our engineering team is rolling out a new Kubernetes model. I have a requirement to run multiple Kubernetes clusters, let's say 4-6. Even though the workload is split across multiple k8s clusters, I do not have a requirement to split the logging, and I believe it would be easier to find the logs for pods in all clusters in a centralized location. It also means less maintenance for Kibana/Elasticsearch.
Using EFK for Kubernetes, can I point Fluentd from multiple k8s clusters at a single Elasticsearch/Kibana? I don't think I'm the first one with this thought; however, I haven't been able to find any discussion of doing this. I found lots of discussions of setting up EFK, but everything I have found only discusses a single k8s cluster shipping to its own Elasticsearch/Kibana.
Has anyone else gone down the path of using a single ES/Kibana to service logs from multiple Kubernetes clusters? We'll plunge ahead with testing it out, but I'm curious whether anyone else has already gone down this road.
I don't think you should create an Elasticsearch instance for each Kubernetes cluster; you can run one main Elasticsearch instance and index all the logs into it.
But even if you don't have an Elasticsearch instance for each Kubernetes cluster, I think you should have a DRP (disaster recovery plan). So, instead of shipping the logs of all pods to Elasticsearch directly, maybe send them to Kafka first and then split them across two Elasticsearch clusters.
It also depends a lot on the use case: if every Kubernetes cluster is in a different region and you need the pods' logs at low latency (<1s), then maybe one Elasticsearch instance is not the right answer.
Based on [1] we can read:
Fluentd collects logs from pods running on cluster nodes, then routes
them to a centralized Elasticsearch.
Then Elasticsearch ingests these logs from Fluentd and stores them in a central location. It is also used to efficiently search text files.
Kibana is the UI; the user can visualize the collected logs and metrics and create custom dashboards based on queries.
There are several ways in which these tools can solve your dilemma:
a) Create a centralized dashboard and use each cluster's Elasticsearch as the backend, so you can see all your clusters' logs in one place.
b) Create an Elasticsearch cluster and add each Elasticsearch into it. This is NOT the best option since you will duplicate your data several times, you will need to handle each index's shards, and you will have to fight the split-brain problem, but it's great for data resiliency.
c) Use another solution like an APM (New Relic, Instana, etc) to fully centralize your logs in one place.
[1] https://techbeacon.com/enterprise-it/9-top-open-source-tools-monitoring-kubernetes
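If you do point several clusters at one Elasticsearch, it helps to tag every log record with the cluster it came from (for example with Fluentd's record_transformer filter). Below is a rough Go sketch of searching such centralized logs with the official go-elasticsearch client; the cluster field, the kubernetes.namespace_name field and the logstash-* index pattern are assumptions about your Fluentd setup, not givens:

package main

import (
    "fmt"
    "log"
    "strings"

    elasticsearch "github.com/elastic/go-elasticsearch/v8"
)

func main() {
    // Uses ELASTICSEARCH_URL if set, otherwise defaults to http://localhost:9200.
    es, err := elasticsearch.NewDefaultClient()
    if err != nil {
        log.Fatal(err)
    }

    // Hypothetical query: only the logs shipped from one particular cluster,
    // assuming Fluentd adds a "cluster" field to every record.
    query := `{
      "query": {
        "bool": {
          "filter": [
            { "term":  { "cluster": "k8s-prod-eu" } },
            { "match": { "kubernetes.namespace_name": "kube-system" } }
          ]
        }
      }
    }`

    res, err := es.Search(
        es.Search.WithIndex("logstash-*"), // index pattern is an assumption
        es.Search.WithBody(strings.NewReader(query)),
    )
    if err != nil {
        log.Fatal(err)
    }
    defer res.Body.Close()
    fmt.Println(res.Status())
}

The same per-cluster term filter can back separate Kibana dashboards even though all clusters share one Elasticsearch.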

Elasticsearch cluster on Kubernetes cluster vs VM

I want to set up the Elastic Stack (Elasticsearch, Logstash, Beats and Kibana) for monitoring my Kubernetes cluster, which is running on on-prem bare metal. I need some recommendations on the following 2 approaches, i.e. which one would be more robust, fault-tolerant and production grade. Let's say I have a K8s cluster named K8-abc.
Approach 1: Would it be good to set up the Elastic Stack outside the Kubernetes cluster?
In this approach, all the logs from pods running in the kube-system namespace and user-defined namespaces would be fetched by Beats (running on K8-abc) and put into the ES cluster, which is configured on Linux bare metal, via Logstash (which is also running on VMs). And for fetching the Kubernetes node logs, the Beats running on the respective VMs (which participate in forming K8-abc) would fetch the logs and put them into the ES cluster configured on the VMs. The thing to note here is that the VMs used for forming the ES cluster are not part of K8-abc.
Approach 2: Would it be good to set up the Elastic Stack on the Kubernetes cluster K8-abc itself?
In this approach, all the logs from pods running in the kube-system namespace and user-defined namespaces would be sent to the Elasticsearch cluster configured on K8-abc via Logstash and Beats (both running on K8-abc). For fetching the K8-abc node logs, the Beats running on the VMs (which participate in forming K8-abc) would put the logs into the ES running on K8-abc via the Logstash running on K8-abc.
Can someone help me evaluate the pros and cons of the two approaches mentioned above? Links to relevant blogs and case studies would also be helpful.
I would be more inclined towards the second solution. It has many advantages over the first one, although it may seem more complex when it comes to the initial setup. You can actually ask a similar question when migrating any other type of workload to Kubernetes. It has many advantages over VMs. To name just a few:
self-healing cluster,
service discovery and integrated load balancing,
such a solution is much easier to scale (HPA) in comparison with VMs,
storage orchestration: Kubernetes allows you to automatically mount a storage system of your choice, such as local storage, public cloud providers, and many more, including the Dynamic Volume Provisioning mechanism.
All the above points could easily be applied to any other workload and may be seen as Kubernetes advantages in general, so let's look at why to use it for implementing the Elastic Stack:
It looks like Elastic is actively promoting use of Kubernetes on their website. See also this article.
They also provide an official Elasticsearch Helm chart, so it is already quite well supported by Elastic.
There are probably many other reasons in favour of the Kubernetes solution that I didn't mention here. Here you can find a hands-on article about setting up highly available and scalable Elasticsearch on Kubernetes.

Elasticsearch in production with Kubernetes

I am working on a product in which we are using Elasticsearch for search. Our production setup is on K8s (1.7.7) and we are able to scale it pretty well. The only thing I am not sure about is whether we should host Elasticsearch in K8s (it could also go on dedicated hosts using node label selectors) or whether it is advisable to host Elasticsearch on VMs rather than in Docker.
Our data set size is 2-3 GB and will grow further, but this is the benchmark we can work with.
The Elasticsearch cluster I am planning to have is: 3 master (with 2 as eligible master), one client node, and one data node. We can scale the data nodes and client nodes as the data grows.
Has anyone done this before? Thanks in advance.
IMO the best resource for Elasticsearch on Kubernetes is https://github.com/pires/kubernetes-elasticsearch-cluster
Note that while there are official Docker containers, no official solution for orchestration is being provided at the moment. This is currently covered by the community only.
3 master (with 2 as eligible master)
This doesn't make much sense. You'll want 3 master-eligible nodes with the setting discovery.zen.minimum_master_nodes: 2, and one of the 3 nodes will become the actual master.
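For reference, the usual rule behind that value is a strict majority of the master-eligible nodes; a tiny Go helper makes the arithmetic explicit (a sketch of the rule, not part of any Elasticsearch API):

package main

import "fmt"

// quorum returns the value normally used for discovery.zen.minimum_master_nodes:
// a strict majority of the master-eligible nodes, to avoid split brain.
func quorum(masterEligible int) int {
    return masterEligible/2 + 1
}

func main() {
    fmt.Println(quorum(3)) // 2, i.e. minimum_master_nodes: 2 for 3 master-eligible nodes
}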

Adding node to existing cluster in Kubernetes

I have a Kubernetes cluster running on 2 machines (a master-minion node and a minion node). I want to add a new minion node without disrupting the current setup; is there a way to do it?
I have seen that when I try to add the new node, the services on the other nodes stop, because of which I have to stop the services before deploying the new node to the existing cluster.
To do this in the latest version (tested on 1.10.0) you can issue the following command on the master node:
kubeadm token create --print-join-command
It will then print out a new join command (like the one you got after kubeadm init):
kubeadm join 192.168.1.101:6443 --token tokentoken.lalalalaqyd3kavez --discovery-token-ca-cert-hash sha256:complexshaoverhere
You need to run kubelet and kube-proxy on the new minion, specifying the API server address in the params.
Example:
kubelet --api_servers=http://<API_SERVER_IP>:8080 --v=2 --enable_server --allow-privileged
kube-proxy --master=http://<API_SERVER_IP>:8080 --v=2
After this you should see the new node in
kubectl get no
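If you would rather check from Go (in the spirit of the first question above), here is a minimal client-go sketch, assuming a kubeconfig at the default location, that lists the nodes and their Ready condition, roughly what kubectl get no shows:

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Reuse the local kubeconfig, as kubectl does.
    config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }

    nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    for _, node := range nodes.Items {
        // Report whether the kubelet has posted a Ready=True condition.
        ready := "NotReady"
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
                ready = "Ready"
            }
        }
        fmt.Printf("%s\t%s\n", node.Name, ready)
    }
}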
In my case the issue was due to an existing wrong Route53 "A" record.
Once it was updated to point to the internal IPs of the API servers, kube-proxy was able to reach the masters and the node appeared in the list (kubectl get nodes).
