I have a cluster of Elasticsearch nodes running on different AWS EC2 instances. They connect to each other via an internal AWS network, so their network and discovery addresses are set up within this internal network. I want to use the Python elasticsearch library to connect to these nodes from the outside. The EC2 instances have static public IP addresses attached, and the Elasticsearch instances allow HTTPS connections from anywhere.

The connection works fine, i.e. I can connect to the instances via the browser and via the Python elasticsearch library. However, I now want to set up sniffing, so I set up my Python code as follows:
self.es = Elasticsearch(
    [f'https://{elastic_host}:{elastic_port}' for elastic_host in elastic_hosts],
    sniff_on_start=True,
    sniff_on_connection_fail=True,
    sniffer_timeout=60,
    sniff_timeout=10,
    ca_certs=ca_location,
    verify_certs=True,
    http_auth=(elastic_user, elastic_password),
)
If I remove the sniffing parameters, I can connect to the instances just fine. With sniffing, however, startup immediately fails with elastic_transport.SniffingError: No viable nodes were discovered on the initial sniff attempt.
http.publish_host in the elasticsearch.yml configuration is set to the public IP address of my EC2 machines, and the /_nodes/_all/http endpoint returns the public IPs as the publish_address (i.e. x.x.x.x:9200).
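For reference, the relevant elasticsearch.yml setting looks like this (the IP is a placeholder for each instance's public address):

# elasticsearch.yml -- advertise the public IP so external clients can reach sniffed nodes
http.publish_host: x.x.x.x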
After testing with our other microservices, we found that the problem lies with the elasticsearch-py library rather than with our Elasticsearch configuration: another of our microservices, which is Golang-based, could perform sniffing against the same cluster with no problem.
After further investigation, we traced the problem to this open issue on the elasticsearch-py library: https://github.com/elastic/elasticsearch-py/issues/2005.

The problem is that the authorization headers are not properly passed to the request that sniffing makes to Elasticsearch to discover the nodes. To my knowledge, there is currently no fix that does not involve altering the library itself. In any case, the error message is clearly misleading.
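Until that issue is resolved, one possible workaround (just a sketch, not an official fix) is to skip the client's built-in sniffing and build the host list yourself by calling the /_nodes/_all/http endpoint with authentication. The variables are the same ones used in the snippet above:

import requests
from elasticsearch import Elasticsearch

def discover_nodes(seed_host):
    # Query the nodes API ourselves, with auth, since the client's own
    # sniffing request omits the Authorization header (see linked issue).
    resp = requests.get(
        f"https://{seed_host}:{elastic_port}/_nodes/_all/http",
        auth=(elastic_user, elastic_password),
        verify=ca_location,
        timeout=10,
    )
    resp.raise_for_status()
    nodes = resp.json()["nodes"]
    # Relies on publish_address being the public "x.x.x.x:9200", as described above.
    return [f"https://{n['http']['publish_address']}" for n in nodes.values()]

es = Elasticsearch(
    discover_nodes(elastic_hosts[0]),
    ca_certs=ca_location,
    verify_certs=True,
    http_auth=(elastic_user, elastic_password),
)

Note that this only discovers nodes at startup; unlike real sniffing, it will not refresh the node list on connection failures.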
I have an idea to implement: I need to create a cache of the Pod IPs in a Kubernetes cluster and then use an HTTP request to access the cached IPs. I'm using Golang, and as I'm new to this field I would be grateful if anyone has an idea how to implement this. I searched a lot on the internet but didn't find any simple examples to use as a starting point.

I started with a piece of code that gets the Pod list. What I need is to put that Pod list in a cache, so that each time a request arrives it uses the cache instead of calling the Kubernetes API to get the IPs.
kubeClient, err := kubernetes.NewForConfig(cfg)
if err != nil {
    fmt.Printf("Error building kubernetes clientset: %v\n", err)
    os.Exit(2)
}

options := metav1.ListOptions{
    LabelSelector: "app=hello",
}
// Recent client-go versions require a context as the first argument.
podList, err := kubeClient.CoreV1().Pods("namespace").List(context.TODO(), options)
if err != nil {
    fmt.Printf("Error listing pods: %v\n", err)
    os.Exit(2)
}

What I need is to create a cache for the IPs of the hello Pods, for example, so that when an HTTP request arrives at my HTTP server it uses the cached IPs directly.
I appreciate your help. Thank you in advance.
There's only one correct answer to such a question: don't do it. It's a very, very bad idea. Believe me, with such a solution you'll only create more problems that you'll then need to solve. What about updating the cache when a Pod gets recreated and both its name and IP change?
A Pod in Kubernetes is an object of an ephemeral nature, and it can be destroyed and recreated under totally normal circumstances; e.g. as a result of scaling down the cluster and draining a node, Pods are evicted and rescheduled on a different node, with completely different names and IP addresses.
The only stable manner of accessing your Pods is via a Service that exposes them.
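For example, a minimal Service selecting your app=hello Pods could look like this (the name and ports are assumptions for illustration):

# Sketch of a Service stably exposing the "app=hello" Pods.
apiVersion: v1
kind: Service
metadata:
  name: hello
  namespace: namespace
spec:
  selector:
    app: hello
  ports:
    - port: 80          # port the Service listens on
      targetPort: 8080  # port the Pods serve on (assumption)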
To minimize the latency when receiving each new request, I need to use the IPs from a cache instead of trying to get the IP of the Pod from the Kubernetes API.
It's really reinventing the wheel. Every time a Service is created (except Services without selectors), a corresponding Endpoints object is created as well. And it acts exactly like the caching mechanism you need: it keeps track of all the IP addresses of the Pods and gets updated if a Pod is recreated and its IP changes, so you have a guarantee that it is always up to date. If you implemented your own caching mechanism, you would need to call the Kubernetes API anyway, to make sure that a Pod with a given IP still exists and, if it doesn't, what was created instead of it, with what name and what IP address. Quite bothersome, isn't it?
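To see that built-in "cache" in action from Go, you could read the Endpoints object directly instead of listing Pods. A minimal sketch, assuming a Service named hello in the namespace "namespace" from your snippet, and that the code runs in-cluster:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Assumes the code runs inside the cluster; use clientcmd for out-of-cluster config.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	kubeClient, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The Endpoints object of the "hello" Service already tracks the
	// current Pod IPs -- Kubernetes keeps it up to date for you.
	ep, err := kubeClient.CoreV1().Endpoints("namespace").Get(context.TODO(), "hello", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, subset := range ep.Subsets {
		for _, addr := range subset.Addresses {
			fmt.Println(addr.IP) // current, up-to-date Pod IPs
		}
	}
}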
And it is not true that each time you access a Pod you need to make a call to the Kubernetes API to get its IP address. In fact, a Service is implemented as a set of iptables rules on each node. When a request hits the virtual IP of the Service, it falls into a specific iptables chain and gets routed by the kernel to a backend Pod. Kubernetes networking is a really broad topic, so I recommend reading about it, e.g. using the resources attached at the end. Without going into unnecessary detail, it's worth mentioning that each time the cluster configuration changes (e.g. a Pod is recreated and gets a different IP, so the respective Endpoints object, which tracks the Pod IPs for the specific Service, changes), kube-proxy, which runs as a container on every node, takes care of updating the above-mentioned iptables forwarding rules. When running in iptables mode (the most common implementation), kube-proxy configures Netfilter chains so the connection is routed directly to the backend container's endpoint by the node's kernel.
So an API call is made only when the cluster configuration changes, so that kube-proxy can update the iptables rules. Normally, when you access a Pod via a Service, the traffic is routed to the backend Pods based on the current iptables rules, with no need to ask the Kubernetes API for the IP of the Pod.
In fact, Krishna Chaurasia has already answered your question (briefly but 100% correctly) by saying:
You should not access pods by their IPs. They are not persisted across
pod restarts.

and

that's not how K8s works. Requests are forwarded based on the Service
generally and they are redirected towards the matching pods based on
labels/selectors. – Krishna Chaurasia
I can only agree with that. And my reasons for "why" have been explained in detail above.
Additional resources:
Kubernetes Networking Demystified: A Brief Guide
iptables: How Kubernetes Services Direct Traffic to Pods
Kubernetes: Service, load balancing, kube-proxy, and iptables
Network overview in the GCP docs, or more specifically, a very nice explanation of how kube-proxy works
I have a .NET Kafka client (using librdkafka via Confluent's .NET client) running on a physical server with two active network interfaces. One is 10G and the other is 1G, and both have static IP addresses assigned. Our networking team handles the configuration and is unlikely to change its practices for one application, so I'd like to handle this client-side. I should also mention that the 1G and 10G interfaces are on the same network.
Since my Kafka cluster (3-node) is all 10G, I would like to require my application's consumer to bind to the 10G IP address. Looking through all of the documentation, I can't find anything about defining this on the client.
I would like to avoid any "hacky" solutions like setting Kafka to deny any non-whitelisted IP addresses or DNS tomfoolery.
Thanks in advance!
Just to be sure: do you know whether your server is doing interface bonding (meaning traffic is load-balanced across the interfaces)? Bonding interfaces of different speeds would be unusual, though.
If not, since your two interfaces are on the same network, you will only use one interface to reach that network (unless you have an exotic routing configuration). That interface is determined by your default route.
If it's a Linux server, you can check as follows:

ip route
default via X.X.X.X dev YOURDEFAULTINTERFACE
If that is the 10G interface, you have nothing to do: you can be sure it will be used.
If not, there is nothing you can do on the Kafka side, as this is purely an OS-level setting: the kernel will forward all traffic through the default interface.
Again, I insist: this is only the case because both your interfaces are on the same network.
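A quick way to verify which interface the kernel will actually pick for a given broker address (the IPs below are placeholders):

ip route get 192.0.2.10
# example output: 192.0.2.10 dev eth10g src 192.0.2.2 uid 1000
# "dev" shows the interface the kernel will use for that destination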
If you have any doubts about this, please share your network configuration in detail (the output of ip addr and ip route).
Yannick
For example, I have 3 NiFi nodes in a NiFi cluster. Example addresses of these nodes:
192.168.12.50:8080(primary)
192.168.54.60:8080
192.168.95.70:8080
I know that I can access the NiFi REST API from all NiFi nodes. I have a GetHTTP processor that gets the cluster summary from the REST API, and this processor runs only on the primary node. I set the "URL" property of this processor to 192.168.12.50:8080/nifi-api/controller/cluster.
But if the primary node goes down, a new primary node will be elected, and the processor on the new primary node will still point at 192.168.12.50:8080, the address of the node that went down. So I will not be able to get the cluster summary from the REST API.
In this case, can I use "localhost:8080/nifi-api/controller/cluster" instead of "192.168.12.50:8080/nifi-api/controller/cluster" on each node in the NiFi cluster?
It depends on a few things. If you are running securely, then the certificates generated for each node are specific to that node's hostname, and the host in the web request needs to match the host in the certificate, so you can't use localhost in that case.
It also depends on how NiFi's web server is configured. If nifi.web.http.host or nifi.web.https.host has a specific hostname specified, then the web server is bound only to that hostname and may not accept connections with a different hostname. In a default unsecured setup, if you leave nifi.web.http.host blank, then it binds to all interfaces.
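For example, in a default unsecured setup the relevant nifi.properties entries look like this (a blank host binds to all interfaces):

# nifi.properties -- blank host = bind to all interfaces
nifi.web.http.host=
nifi.web.http.port=8080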
You may be able to use the expression language hostname() function to obtain the hostname of the current node. That would make the URL something like "http://${hostname()}:8080/nifi-api/controller/cluster".
I have searched before writing this... All I found is that at some point they use load-balancer hardware or software. But what I need to know is: can we do load balancing without dedicated hardware or software?
While searching, I came across the statement below:
"Another way to distribute requests is to have a single virtual IP (VIP) that all clients use. And for the computer on that 'virtual' IP to forward the request to the real servers"
Could anyone please let me know how to do virtual IP load balancing? I have searched lots of articles, but I could not find anything about VIP configuration or setup; all I found was theoretical material.

I need to divide the incoming requests between two applications. In this case, both application servers should be up and running.
Below is the architecture:
Application Node 1 : 10.66.204.10
Application Node 2 : 10.66.204.11
Virtual IP: 10.66.204.104
Run an instance of Nginx and use it as a load-balancing gateway for connections. There's no difference between using virtual IPs and actual IPs, although it helps if your cloud setup is on LAN-based IPs, for both security and ease.
Depending on your setup, there are two paths to take:

Dynamically assign connections to a server. This can be done as a split (evenly distributed) or by filling one instance until it's full, then overflowing to the next.

Assign each function its own IP. For example, you can configure the gateway to serve static content itself and request dynamic content from other servers.
Configuring Nginx is a big job, but it's a relatively well-documented process, and it shouldn't be hard for you to find a guide that suits your needs.
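As a starting point, here is a minimal sketch of such a gateway, using the node IPs from your architecture (the listen and application ports are assumptions):

# nginx.conf (sketch) -- round-robin across the two application nodes
events {}

http {
    upstream app_nodes {
        server 10.66.204.10:8080;   # Application Node 1 (port is an assumption)
        server 10.66.204.11:8080;   # Application Node 2
    }

    server {
        listen 10.66.204.104:80;    # the virtual IP from the question
        location / {
            proxy_pass http://app_nodes;
        }
    }
}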
I have a question about clustering, or more specifically about reconnection within a cluster, in Elasticsearch.
I have 2 Elasticsearch servers on 2 different machines within a network. Both Elasticsearch instances are in the same cluster.
In an error scenario, the network connection could be broken. I simulate this behaviour by pulling the network cable on one server.
After reconnecting the server to the network, the clustering no longer works: when I put some data into one Elasticsearch instance, the data is not transferred to the other.
Does anybody know if there are settings that control reconnection?
Best Regards
Thomas
Why not just put all Elasticsearch servers behind a load balancer with a single DNS name? If a server goes down and needs manual intervention, then once the problem is corrected it will automatically become available under the load balancer again.
Did you check whether all nodes joined the cluster again?
You may want to try the following APIs:

Check node status:
http://es-host:9200/_nodes

Check cluster status:
http://es-host:9200/_cluster/health
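For example (the host is a placeholder), a two-node cluster that has fully re-formed should report both nodes:

curl "http://es-host:9200/_cluster/health?pretty"
# expect "number_of_nodes": 2 and "status": "green"
# ("yellow" while replica shards are still recovering)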