Connect to hadoop kerberos from kubernetes pod

What ports need to be exposed to be able to connect to Kerberos/Hadoop from within a pod? The Deployment's Service is created with type ClusterIP.
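A Service of type ClusterIP only governs traffic coming into the pod; outbound connections from the pod to Kerberos and Hadoop need no Service ports at all, only network reachability (plus an egress rule if NetworkPolicies are enforced). As a rough sketch only, using common default ports (Kerberos KDC 88, HDFS NameNode RPC 8020, Hadoop 3 DataNode 9866) and a hypothetical app: hadoop-client label; the actual ports depend on your Hadoop and Kerberos configuration:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-hadoop-egress        # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: hadoop-client           # hypothetical label of the client pod
  policyTypes:
  - Egress
  egress:
  - ports:
    - port: 88                     # Kerberos KDC
      protocol: TCP
    - port: 88
      protocol: UDP
    - port: 8020                   # HDFS NameNode RPC (common default)
      protocol: TCP
    - port: 9866                   # HDFS DataNode data transfer (Hadoop 3 default)
      protocol: TCP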

Related

Application deployed on kubernetes in aws ec2 instances is not accessible without nginx external ip port number

I have deployed microservice-based applications on a Kubernetes setup running on EC2 instances.
My web application is accessible if I add the port number of the ingress-nginx external IP to the URL, but I want it to be accessible without the port number.
The same deployment works without a port number in the on-prem setup.
All ports are open in the AWS security settings.
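One common reason for this on self-managed EC2 (as opposed to a managed load balancer setup) is that the ingress-nginx controller is exposed through a NodePort Service, so the high node port has to appear in the URL. A rough sketch of exposing the controller on the standard ports via a Service of type LoadBalancer instead; the names and labels below are assumptions about a typical ingress-nginx install, not values from the question:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx                          # assumed controller Service name
  namespace: ingress-nginx                     # assumed namespace
spec:
  type: LoadBalancer                           # on AWS this provisions an ELB listening on 80/443
  selector:
    app.kubernetes.io/name: ingress-nginx      # assumed controller pod label
  ports:
  - name: http
    port: 80                                   # no port needed in the URL when the listener is 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443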

Connecting to kubernetes cluster using helm go client

I am using the Helm Go client to install a release. The link is
https://github.com/helm/helm/tree/55193089e67207363e6f448054113b5a36649d74/pkg/helm
What I didn't get is how I can give my kubeconfig file to this Go client so that it can communicate with the Kubernetes cluster. I went through the documentation, but it doesn't have details on this.
Is there any other way to connect to the cluster from the Helm Go client?

How do I allow a kubernetes cluster to access my ec2 machine?

I want to allow a Kubernetes cluster, and all the pods running in it, to access my EC2 machine.
This means I have to allow a particular IP or a range of IPs in the security group of my EC2 machine.
But what is the IP or range of IPs that I'd have to enter in the security group of the EC2 machine?
The pods in Kubernetes run on worker nodes, which are themselves EC2 instances with their own security group. If you want your EC2 instance, which is outside the cluster, to accept connections from pods in the Kubernetes cluster, add an inbound rule on the EC2 instance's security group with the worker nodes' security group as the source, as sketched below.
It is also worth asking why the pods in the Kubernetes cluster need to access an EC2 instance outside the cluster: you could instead bring the EC2 instance into your Kubernetes cluster and, if need be, expose the EC2 instance's process via a Kubernetes Service.
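Expressed as a CloudFormation-style resource, such an inbound rule might look like the sketch below; the security group IDs and the port are placeholders, not values from the question:

# Let the cluster's worker-node security group reach the standalone EC2 instance
InstanceIngressFromWorkerNodes:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: sg-0000extinstance            # placeholder: security group of the EC2 machine
    IpProtocol: tcp
    FromPort: 5432                         # placeholder: whatever port the instance serves
    ToPort: 5432
    SourceSecurityGroupId: sg-0000workers  # placeholder: the worker nodes' security group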

Kubernetes and Prometheus not working together with Grafana

I have created a Kubernetes cluster on my local machine, with one master and at the moment zero workers, using kubeadm as the bootstrap tool. I am trying to get Prometheus (from the Helm package manager) and Kubernetes metrics together into the Grafana Kubernetes App, but this is not working. The way I am setting up the monitoring is:
Open grafana-server on port 3000 and install the Kubernetes app.
Install stable/prometheus from Helm, using a custom YAML file I found in another guide.
Add the Prometheus data source to Grafana with the IP of the Kubernetes Prometheus service (or pod; I tried both and both work) and use TLS Client Auth.
Start the proxy with kubectl proxy.
Fill in all the information needed in the Kubernetes Grafana app and deploy it. No errors.
All Kubernetes metrics show up, but no Prometheus metrics.
If the kubectl proxy connection is stopped, the Prometheus metrics can be seen. There are no problems connecting to the Prometheus pod or service IP while kubectl proxy is running. Does someone have a clue what I am doing wrong?
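For reference, the data-source step above corresponds to a definition along these lines, shown here in Grafana's file-based provisioning format; the URL assumes the stable/prometheus chart's default prometheus-server Service and namespace, which may differ in your cluster (check kubectl get svc). TLS Client Auth is typically not needed when Prometheus is reached over plain HTTP inside the cluster:

apiVersion: 1
datasources:
- name: Prometheus
  type: prometheus
  access: proxy                                              # Grafana's backend proxies the requests
  url: http://prometheus-server.default.svc.cluster.local    # assumed chart default Service and namespace
  isDefault: true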

tunnel or proxy from app in one kubernetes cluster (local/minikube) to a database inside a different kubernetes cluster (on Google Container Engine)

I have a large read-only Elasticsearch database running in a Kubernetes cluster on Google Container Engine, and I am using minikube to run a local dev instance of my app.
Is there a way I can have my app connect to the cloud Elasticsearch instance, so that I don't have to create a local test database with a subset of the data?
The database contains sensitive information, so it can't be visible outside its own cluster or VPC.
My fallback is to run kubectl port-forward inside the local pod:
kubectl --cluster=<gke-database-cluster-name> --token='<token from ~/.kube/config>' port-forward elasticsearch-pod 9200
but this seems suboptimal.
I'd use an ExternalName Service like this:
kind: Service
apiVersion: v1
metadata:
  name: elastic-db
  namespace: prod
spec:
  type: ExternalName
  externalName: your.elastic.endpoint.com
According to the docs
An ExternalName service is a special case of service that does not have selectors. It does not define any ports or endpoints. Rather, it serves as a way to return an alias to an external service residing outside the cluster.
If you need to expose the Elasticsearch database, there are two ways of exposing applications outside the cluster:
Creating a Service of type LoadBalancer, which would load-balance the traffic across all instances of your Elasticsearch database. Once the load balancer is created on GKE, just use the load balancer's DNS name as the externalName of the elastic-db Service created above (see the sketch below).
Using an Ingress controller. The Ingress controller will have an IP that is reachable from outside the cluster; use that IP as the externalName of the elastic-db Service created above.
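Since the data must stay inside the VPC, the LoadBalancer option would typically be an internal load balancer. A rough sketch of what that Service might look like on the GKE side; the selector label and namespace are assumptions, not values from the question:

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-internal-lb
  namespace: prod
  annotations:
    cloud.google.com/load-balancer-type: "Internal"   # keeps the load balancer private to the VPC (newer GKE versions use networking.gke.io/load-balancer-type)
spec:
  type: LoadBalancer
  selector:
    app: elasticsearch            # assumed Elasticsearch pod label
  ports:
  - port: 9200                    # Elasticsearch HTTP port
    targetPort: 9200

The address this load balancer receives is then what goes into the externalName of the elastic-db Service in the local cluster, and the local app simply connects to elastic-db:9200.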
