Adding node to existing cluster in Kubernetes - cluster-computing

I have a Kubernetes cluster running on 2 machines (a master-minion node and a minion node). I want to add a new minion node without disrupting the current setup; is there a way to do it?
I have seen that when I try to add the new node, the services on the other nodes stop, which is why I have to stop the services before deploying the new node to the existing cluster.

To do this in the latest version (tested on 1.10.0), you can issue the following command on the master node:
kubeadm token create --print-join-command
It will then print out a new join command (like the one you got after kubeadm init):
kubeadm join 192.168.1.101:6443 --token tokentoken.lalalalaqyd3kavez --discovery-token-ca-cert-hash sha256:complexshaoverhere
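Run that printed join command on the new node (assuming kubeadm, kubelet, and a container runtime are already installed there), then verify from the master:
kubectl get nodes
The new node should appear in the list and eventually report a Ready status, without touching the existing nodes.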

You need to run kubelet and kube-proxy on the new minion, pointing them at the API server address via their flags.
Example:
kubelet --api_servers=http://<API_SERVER_IP>:8080 --v=2 --enable_server --allow-privileged
kube-proxy --master=http://<API_SERVER_IP>:8080 --v=2
After this, you should see the new node in
kubectl get no
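If the kubelet registers successfully, the new minion is listed alongside the existing ones; roughly like this (the exact columns depend on the Kubernetes version, and the names here are only illustrative):
NAME            STATUS
192.168.1.101   Ready
192.168.1.102   Ready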

In my case the issue was due to an existing wrong Route53 "A" record.
Once it was updated to point to the internal IPs of the API servers, kube-proxy was able to reach the masters and the node appeared in the list (kubectl get nodes).
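A quick way to confirm this kind of DNS mismatch (the hostname below is hypothetical) is to resolve the API server record from the affected node and check that it returns the internal IPs:
dig +short k8s-api.example.internal
If it returns a stale or external address, kubelet and kube-proxy will keep failing to reach the masters.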

Related

How to change cluster IP in a replication controller run time

I am using Kubernetes 1.0.3 with a master and 5 minion nodes deployed.
I have an Elasticsearch application that is deployed on 3 nodes using a replication controller, and a service is defined.
Now I have added a new minion node to the cluster and want to run the Elasticsearch container on the new node.
I am scaling my replication controller to 4 so that, based on the node label, the Elasticsearch container is deployed on the new node. Below is my issue; please let me know if there is any solution.
The cluster IP defined in the RC is wrong, as it is not the same as in the service.yaml file. Now when I scale the RC, the new node gets the ES container pointing to the wrong cluster IP, due to which the new node is not joining the ES cluster. Is there any way I can modify the cluster IP of the deployed RC so that when I scale it, the image is deployed on the new node with the correct cluster IP?
Since I am using an old version, I don't have the kubectl edit command; I tried changing it with kubectl patch, but the IP didn't change.
The problem is that I need to do this on a production cluster, so I can't delete the existing pods; the only option is to change the cluster IP of the deployed RC and then scale so that it picks up the new IP and the image starts accordingly.
Please let me know if there is any way I can do this.
Kubernetes creates that (virtual) ClusterIP for every service.
Whatever you defined in your service definition (which you should have posted along with your question) is being ignored by Kubernetes, if I recall correctly.
I don't quite understand the issue with scaling, but basically, you want to point at the service name (resolved by Kubernetes's internal DNS) rather than the ClusterIP.
E.g., http://myelasticsearchservice instead of http://1.2.3.4
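As a rough sketch (the labels and port below are made up, not taken from the question), the idea is to define the service once and have the Elasticsearch pods discover each other through its DNS name rather than a hard-coded ClusterIP:
apiVersion: v1
kind: Service
metadata:
  name: myelasticsearchservice
spec:
  selector:
    app: elasticsearch
  ports:
  - port: 9200
    targetPort: 9200
Clients inside the cluster then talk to http://myelasticsearchservice:9200, which keeps working no matter which ClusterIP Kubernetes assigns to the service.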

Starting a node in emqtt and creating cluster

I am new to emqtt and Erlang. Using the documentation provided at emqtt.io, I configured emqtt on my machine and wanted to create a cluster.
I followed the steps given below to create the nodes:
erl -name node1@127.0.0.1
erl -name node2@127.0.0.1
And to connect these nodes, I used the command below:
(node1@127.0.0.1)1> net_kernel:connect_node('node2@127.0.0.1')
I am not getting any response (true or false) after executing this command.
Also I tried the following command
./bin/emqttd_ctl cluster emqttd@192.168.0.10
but got a failure message
Failed to join the cluster: {node_down,'node1@127.0.0.1'}
When I hit the URL localhost:8080/status I am getting the following message
Node emq@127.0.0.1 is started
emqttd is running
But I couldn't get any details about the cluster.
Am I following the right steps? I need help with creating a cluster in emqtt.
Thanks in advance!!
For each node created on a machine, a separate process is started; creating many nodes eventually eats up most of the memory, which leads to a situation where you will not be able to join any nodes into a cluster. Hence, before joining, stop the nodes that are not in use using the ./emqttd stop command.
You need two emqx nodes running on different machines, as the ports may conflict with each other on the same machine.
And the node names MUST NOT use the loopback IP address 127.0.0.1, such as node1@127.0.0.1.
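For reference, a rough sketch of what the answers suggest, with example IPs and node names bound to routable addresses rather than 127.0.0.1 (the exact emqttd_ctl subcommand differs between emqttd/emqx versions, e.g. cluster <node> vs. cluster join <node>):
# on 192.168.0.10: start the broker with node name emqttd@192.168.0.10
./bin/emqttd start
# on 192.168.0.20: start the second broker, then join it to the first one
./bin/emqttd start
./bin/emqttd_ctl cluster join emqttd@192.168.0.10
./bin/emqttd_ctl cluster status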

Set up mesosphere chronos cluster

I am using Chronos as a timer service and need to set up a cluster in case one of the nodes goes down unexpectedly. I set up the Mesos master/slaves and ZooKeeper, and added the Mesos master/ZooKeeper addresses to each Chronos node. What I finally got:
1. each chronos node shared the same jobs data
2. one chronos node as a framework was registered to mesos master
3. I ran curl -IL for each node but didn't get redirected to the leading node. As the doc (https://mesos.github.io/chronos/docs/faq.html#which-node) says, I should be redirected.
By following the clustering guide (https://github.com/Metaswitch/chronos/blob/dev/doc/clustering.md), I created the chronos_cluster.conf and restarted all nodes, but nothing changed. I guess I failed to get the Chronos cluster running correctly. Did I miss something or do anything wrong? I didn't find a guide on http://mesos.github.io/chronos/docs/. Thanks!
Resolved. In fact, as long as all nodes share the same ZooKeeper, they run as a cluster. I saw the log message saying "INFO Proxying request to ip-xxx-xxx-xxx-xxx:4400 . (org.apache.mesos.chronos.scheduler.api.RedirectFilter:37)"
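For anyone hitting the same issue: the key point is that every Chronos instance has to be started against the same ZooKeeper ensemble, along these lines (hostnames are illustrative, and the main class and flag names follow Chronos 2.x and may differ in other builds):
java -cp chronos.jar org.apache.mesos.chronos.scheduler.Main --master zk://zk1:2181,zk2:2181,zk3:2181/mesos --zk_hosts zk1:2181,zk2:2181,zk3:2181 --http_port 4400
With that in place, non-leading nodes proxy or redirect API requests to the leading node, which is exactly what the RedirectFilter log message above shows.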

High availability issue with rethinkdb cluster in kubernetes

I'm setting up a RethinkDB cluster inside Kubernetes, but it doesn't work as expected for the high-availability requirement: when a pod goes down, Kubernetes creates another pod, which runs another container from the same image; the old mounted data (which is already persisted on the host disk) is erased and the new pod joins the cluster as a brand-new instance. I'm running k8s on CoreOS v773.1.0 stable.
Please correct me if I'm wrong, but that way it seems impossible to set up a database cluster inside k8s.
Update: As documented at http://kubernetes.io/v1.0/docs/user-guide/pod-states.html#restartpolicy, with RestartPolicy: Always it will restart the container if it exits with a failure. Does "restart" mean that it brings up the same container, or creates another one? Or maybe it doesn't restart the same container because I stop the pod via the command kubectl stop po?
That's how Kubernetes works, and other solutions probably work the same way. When a machine dies, the containers on it are rescheduled to run on another machine, and that other machine has none of the containers' state. Even when it is the same machine, the container is created as a new one instead of restarting the exited container (with the data inside it).
To persist data, you need some kind of external storage (NFS, EBS, EFS, ...). In the case of k8s, you may want to look into https://github.com/kubernetes/kubernetes/blob/master/docs/design/persistent-storage.md. This GitHub issue also has a lot of information: https://github.com/kubernetes/kubernetes/issues/6893
And indeed, that's the way to achieve HA in my opinion. Containers are all stateless; they don't hold anything inside them. Any configuration they need should be stored outside, for example in something like Consul or etcd. By separating things like this, it's easier to restart a container.
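To make the external-storage point concrete, a minimal sketch (the claim name and size are made up) is to create a PersistentVolumeClaim and mount it into the pod, so the data outlives any single container:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rethinkdb-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
The pod template then references the claim as a volume (persistentVolumeClaim.claimName: rethinkdb-data), so a rescheduled container re-attaches the same data.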
Try using PetSets http://kubernetes.io/docs/user-guide/petset/
That allows you to name your (pet) pods. If a pod is killed, then it will come back with the same name.
Summary of the petset feature is as follows.
Stable hostname
Stable domain name
Multiple pets of a similar type will be named with a "-n" suffix (rethink-0, rethink-1, ... rethink-n, for example)
Persistent volumes
Now apps can cluster/peer together
When a pet pod dies, a new one will be started and will assume all the same "state" (including disk) of the previous one.
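As an illustration only (the PetSet API was alpha; the fields below follow the 1.3/1.4 apps/v1alpha1 shape, and the names, image, and size are made up), a manifest combining stable names with per-pet volumes looked roughly like this:
apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: rethink
spec:
  serviceName: rethink
  replicas: 3
  template:
    metadata:
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
      labels:
        app: rethinkdb
    spec:
      containers:
      - name: rethinkdb
        image: rethinkdb
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
Each pet then comes back as rethink-0, rethink-1, ... with its own claim (data-rethink-0, ...), which is what lets a replacement pod reattach the previous pod's disk. In later Kubernetes releases PetSet was renamed StatefulSet.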

kubernetes go client used storage of nodes and cluster

I am a newbie in Go. I want to get the storage statistics of the nodes and the cluster in Kubernetes using Go code. How can I get the free and used storage of the Kubernetes nodes and cluster using Go?
This is actually 2 problems:
How do I perform http requests to the Kubernetes master?
See [1] for more details. Tl;dr you can access the apiserver in at least 3 ways:
a. kubectl get nodes (not go)
b. kubectl proxy, followed by a go http client to this url
c. Running a pod in a kubernetes cluster
What are the requests I need to do to get node stats?
a. Run kubectl describe node, it should show you resource information.
b. Now run kubectl describe node --v=7, it should show you the REST calls.
I also think you should reformat the title of your question per https://stackoverflow.com/help/how-to-ask, so it reflects what you're really asking.
[1] https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/accessing-the-cluster.md
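To make the Go side concrete, here is a minimal sketch using the client-go library (which postdates the 1.0-era docs in [1]); it only lists the nodes and prints the storage capacity/allocatable reported in their status, and the kubeconfig path is just an example. Actual free/used numbers come from the kubelet's stats/metrics endpoints rather than the node object, so treat this as a starting point:
package main

import (
	"context"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Build a client from the local kubeconfig (example path).
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// List every node and print the ephemeral-storage capacity and allocatable
	// values that the kubelet reports in the node status.
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		capacity := node.Status.Capacity[corev1.ResourceEphemeralStorage]
		allocatable := node.Status.Allocatable[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: storage capacity=%s allocatable=%s\n",
			node.Name, capacity.String(), allocatable.String())
	}
}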
