Change or set an advertise address in MicroK8s

I am creating a cluster in MicroK8s. With upstream Kubernetes (e.g. kubeadm's --apiserver-advertise-address) I can specify an API server advertise address that worker nodes use to join the cluster; I need the same functionality in MicroK8s, but I haven't found a way that works.
Thus far I have modified the file at /var/snap/microk8s/current/certs/csr.conf to add a new entry (IP.5 = MYIPV6), and I have also modified the output of microk8s.kubectl config view --raw and used it as the kubeconfig. Nothing has worked.
In general terms, I want to expose a public IPv6 address so worker nodes can join my cluster.
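For reference, a minimal sketch of the SAN approach, under the assumption that the template file (not csr.conf itself, which MicroK8s regenerates) is the right place to edit; MYIPV6 is the question's own placeholder:

# /var/snap/microk8s/current/certs/csr.conf.template
# add the extra SAN under [ alt_names ]
IP.5 = MYIPV6

# restart so the certificates are regenerated
# (recent MicroK8s releases also offer a refresh-certs command)
sudo microk8s stop && sudo microk8s start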

Related

Consul: get address of a service from a request

When registering a service in Consul I need to pass an Address. But to do so I need to know this address in the first place, which is not always trivial if the machine has multiple network interfaces.
Is there a way to use the source address from the registration request itself, i.e. whichever interface it came in on, just take that source address and use it?
The service catalog is a... catalog: the address a service registers should be reachable by whoever queries the catalog.
I don't believe there's an automatic solution for this, but you can (see the sketch after this list):
register the service multiple times with different tags for the different network interfaces, then query by the relevant tag;
register the service multiple times with different service names for the different network interfaces (e.g. myservice-lan1, myservice-lan2), then query the relevant service name;
run multiple Consul clusters, set up as different datacenters, and use each subnet as a different datacenter.
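A minimal sketch of the first option, using Consul's HTTP registration API; the service name, addresses, and port are hypothetical:

# register the same service once per interface, with a distinguishing tag
curl -X PUT http://localhost:8500/v1/agent/service/register -d '{
  "ID": "myservice-lan1",
  "Name": "myservice",
  "Tags": ["lan1"],
  "Address": "10.0.1.15",
  "Port": 8080
}'

# query only the instances registered on that interface
curl http://localhost:8500/v1/catalog/service/myservice?tag=lan1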

why do we need to set up a publish address [network.host] value

It looks like Elasticsearch is not discoverable without setting the box's IP address in the network.host property.
Why can't it just bind to the box's IP address (like application servers for REST apps do)?
Why is there even a provision to bind to a particular IP address?
The key property that matters is network.publish_host, which you configure indirectly via network.host. The publish host is the address that a node advertises to other nodes as the address on which it can be reached when it joins the cluster. So it needs to be something that is reachable from the other nodes: 127.0.0.1 would not work for this, and likewise a load-balanced address won't work either.
Also see the documentation for these properties.
Many servers have multiple network interfaces, and a common problem before this change was Elasticsearch picking the wrong one for the publish host and then failing to cluster because the nodes ended up advertising the wrong address to each other. Since Elasticsearch cannot know the right interface, you have to tell it.
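For example, in elasticsearch.yml (the addresses here are illustrative):

network.host: 192.168.1.50          # sets both bind_host and publish_host
# or, when they must differ:
network.bind_host: 0.0.0.0          # interfaces to listen on
network.publish_host: 192.168.1.50  # address advertised to the other nodes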
This change has been introduced in 2.0 as explained in the breaking changes > network changes documentation:
This change prevents Elasticsearch from trying to connect to other nodes on your network unless you specifically tell it to do so. When moving to production you should configure the network.host parameter
The ES folks also released a blog article back then to explain the underlying reasons for this change, i.e. mainly to prevent your node from accidentally binding to another cluster available on the network.
To run a single node on a local network, I added these to my elasticsearch.yml (un-comment or comment as needed):
http.port: 9201
http.bind_host: 192.168.1.172      # works
or
http.port: 9201
http.publish_host: 192.168.1.172   # by itself does not work
http.host: 192.168.1.172           # works alone

Elasticsearch-logging rc and svc are getting automatically deleted

https://github.com/GoogleCloudPlatform/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch
When I create the cluster using these configs, the elasticsearch-logging rc and svc get automatically deleted.
From https://github.com/kubernetes/kubernetes/issues/11435 the solution is to remove
kubernetes.io/cluster-service: "true"
Though without that label, Elasticsearch is not available through the Kubernetes master.
Should I create a pull request to remove the line from the files in the repo so people don't get confused?
Firstly, I'd recommend reformatting future questions so they adhere to the Stack Overflow guidelines: https://stackoverflow.com/help/how-to-ask.
I'd recommend making Elasticsearch a normal Kubernetes Service. You can expose it in one of the following ways:
1. Set service.Type = NodePort and access it via any node's public IP at that nodePort (see the sketch below).
2. Set service.Type = LoadBalancer; this will only work on cloud providers that have load balancers.
3. Expose the RC directly through a host port (not recommended).
Those are just the common options for accessing a Service; please see the following thread for a more detailed discussion: https://groups.google.com/forum/#!topic/kubernetes-sig-network/B-A_RuqpFWk
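A minimal sketch of option 1; the service name, selector, and nodePort are assumptions based on the fluentd-elasticsearch addon:

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
spec:
  type: NodePort
  selector:
    k8s-app: elasticsearch-logging
  ports:
  - port: 9200
    targetPort: 9200
    nodePort: 30920    # then reach it at <any-node-public-ip>:30920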
It's generally not a good idea to send all external traffic meant for a Kubernetes Service through the apiserver. However, if you must do so, you can via an endpoint such as:
/api/v1/proxy/namespaces/default/services/nginx:80/
Where default is the namespace, nginx is the name of your service and 80 is the service port (needed to disambiguate multiport services).
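For example, a request through that proxy endpoint might look like this (the apiserver address is a hypothetical placeholder):

curl http://<apiserver-host>:8080/api/v1/proxy/namespaces/default/services/nginx:80/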

How to restart a single-node Hadoop cluster on EC2

I have installed a single-node Hadoop cluster using Hortonworks/Ambari on an Amazon EC2 host.
Since I don't want this cluster running 24/7, I stop the instance when done. When I reboot the instance later, it gets a new IP address, and Ambari is no longer able to start the Hadoop-related services.
Is there a way, other than completely redeploying, to reconfigure the cluster so the services will start?
It looks like the IP address lives in various XML files under /etc, in the ambari Postgres database, and possibly other places I haven't found yet.
I tried updating the XML files and the Postgres database with the new IP address and the internal and external DNS names wherever I could find them, but to no avail. I have not been able to restart the services.
The reason I am doing this is to save the deployment time, the data on HDFS, and other project-specific setup each time I restart the host.
Any suggestions?
Thanks!
An Elastic IP can be used. Also, since you mentioned it being a single-node cluster, you can use localhost or the private IP.
If you use an Elastic IP, your UIs will always be on the same public IP. However, if you use the private IP or localhost and do not associate your instance with an Elastic IP, you will have to look up the public IP every time you start the instance and then connect to the web UI using that IP.
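A sketch of the Elastic IP approach with the AWS CLI; the instance and allocation IDs are hypothetical:

# allocate an Elastic IP and pin it to the instance
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-0abc123 --allocation-id eipalloc-0def456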
Thanks for the help; both Harman and TJ are correct. I haven't used an Elastic IP because I might have more than one of these running at a time, and for now at least, I don't mind looking up the public IP address.
Harman's suggestion of using "localhost" as the FQDN when setting up Ambari in the first place is a really good idea in retrospect. Unless I go through the whole setup again, that's water under the bridge for me, but I recommend it to others who might read this post.
In my case, I figured this out on my own before coming back to the page. The specific step I took was insanely simple after all, thanks to Occam's razor.
I added the following line in /etc/hosts:
<new internal IP> <old internal dns name>
and then ran
ambari-server restart
from the command line. After that, I was able to restart all services after logging into Ambari.
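A concrete illustration of that fix; the IP and hostname are hypothetical:

# /etc/hosts -- map the old internal DNS name to the instance's new internal IP
172.31.5.20  ip-172-31-40-7.ec2.internal

# then restart the Ambari server and start the services from the Ambari UI
sudo ambari-server restart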

Can't join a cluster on MarkLogic

I'm working with the MarkLogic database and I tried to create a cluster.
I already have a development key. The OS is the same on all the nodes (Windows 7 x64).
When you add a node to the cluster, you need to type the host name or the IP address. For some reason, when I type the host name, MarkLogic sometimes can't find the node, but that doesn't matter, because with the IP the connection is successful.
The main problem comes when I continue through the process. At the end, when MarkLogic tries to transfer the cluster configuration information to the new host, the process never ends, and finally a message like "No data received" appears in the web browser.
I know that this message doesn't necessarily mean the process failed, because when I change, for example, the host name, the same message appears.
When I check the summary on the first node, the second node appears, which means the node "joined" the cluster, but I'm not able to start the admin interface, and the second node always appears disconnected, even if I restart the service.
Additionally, I'm able to ping any of the computers from any other.
I tried creating another network, because at my school some ports are not allowed; furthermore, I tried using a different development key, and the same key on both nodes,
and I already have all the services enabled, but the problem persists.
Any help or comments would be appreciated.
Make sure ports 7998-8003 are open on both computers for both inbound and outbound traffic, and that you don't have a firewall (Windows Firewall, or iptables) blocking these.
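Since the nodes here are on Windows 7, a sketch of opening those ports with netsh (run from an elevated prompt; the rule names are arbitrary):

netsh advfirewall firewall add rule name="MarkLogic cluster in" dir=in action=allow protocol=TCP localport=7998-8003
netsh advfirewall firewall add rule name="MarkLogic cluster out" dir=out action=allow protocol=TCP localport=7998-8003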
You can also start looking into the Logs/ErrorLog.txt file and see if something obvious shows up.
Stick to IP addresses for now as it seems your DNS isn't fully working.
Your error looks like some kind of network connectivity problem between the hosts.
You might also get more detailed, or at least different, answers from the MarkLogic developer mailing list.
http://developer.marklogic.com/discuss
-David Lee
Make sure the host names in the MarkLogic configuration match the DNS names at which the hosts can see each other. If those are unreliable, then simply use IP addresses as host names: go to the Admin interface on both ends, look up the host name, change the DNS name to the IP address, and try again.
Also look at DALDEI's suggestion about ports and firewalls; that could be interfering as well.
HTH!
