I've deployed Grafana to an AWS EKS cluster and I want to be able to access it from a web browser. If I create a Kubernetes Service of type LoadBalancer then, based on the very limited AWS networking knowledge I have, I know that this maps to an Elastic Load Balancer. I can get the name of this, go to Network & Security -> Network Interfaces, and get all the interfaces associated with it, one for each EC2 instance. Presumably it's the public IP address associated with each ELB network interface that I need to arrange access to in order to reach my Grafana service. Since, again, my AWS networking knowledge is very lacking: what is the fastest and easiest way for me to make the Grafana Kubernetes service accessible via my web browser?
The easiest way to expose any app running on Kubernetes is to create a Service of type LoadBalancer.
I use the same approach myself for some services to get things up and running quickly.
To get the load balancer's address I run
kubectl get svc
which gives me the load balancer's FQDN. I then map it to a DNS record.
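As a minimal sketch for the Grafana case, the Service might look like this (the service name and the app=grafana label are assumptions here, adjust them to match your actual deployment):

apiVersion: v1
kind: Service
metadata:
  name: grafana               # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: grafana              # assumed pod label on the Grafana deployment
  ports:
  - port: 80                  # port the ELB listens on
    targetPort: 3000          # Grafana's default container port

Once AWS has provisioned the ELB, kubectl get svc grafana shows the ELB's FQDN in the EXTERNAL-IP column, and you can open that address directly in a browser; there is no need to dig through the ELB's network interfaces for public IPs.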
The other way, which I also use, is to deploy the nginx-ingress-controller:
https://kubernetes.github.io/ingress-nginx/deploy/#aws
This also creates a Service of type LoadBalancer.
I then create an Ingress, which is mapped to the ingress controller's ELB:
https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
I use this approach for all my apps: they all run behind one load balancer, mapped to a single ELB via the nginx-ingress-controller.
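To find the ELB FQDN that the controller created, you can query its Service (the namespace and service name below assume a stock ingress-nginx installation and may differ in yours):

kubectl get svc -n ingress-nginx ingress-nginx-controller

Point a DNS record (e.g. a CNAME) for each of your hostnames at that FQDN.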
In my specific scenario, the solution was to open up the port used by the LoadBalancer service.
I am trying to run an application in Kubernetes that will be accessed from the outside world via ingress control. The ingress will add a path '/applicationName', for which I need to configure the reverse proxy settings on the application. What is the best way to handle this requirement in Kubernetes?
I have tried a few workarounds, like using almsmart-nginx-ingress-controller.my-app.svc.cluster.local to resolve the IP, etc., but I am not convinced by the approach.
Any suggestions? Thanks in advance.
The path in your Ingress should point to the service and its port, and your DNS needs to point to the Ingress IP. If you are running on cloud infrastructure, the Ingress (one of the ingress implementations) will sit behind a Service of type LoadBalancer (so point your DNS at that), and requests received by the Ingress will then be forwarded to services depending on the Host and Path in the request.
Here is a sample of the spec part of an Ingress object:
spec:
  rules:
  - host: first.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: firstservice
          servicePort: 80
  - host: second.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: secondservice
          servicePort: 80
I created a custom HTTPS LoadBalancer (details) and I need my Kubernetes workload to be exposed with this LoadBalancer. For now, if I send a request to this endpoint I get a 502 error.
When I choose the Expose option on the Workload Console page, only TCP and UDP service types are available, and a TCP LoadBalancer is created automatically.
How do I expose a Kubernetes workload with an existing LoadBalancer? Or maybe I don't even need to, and the requests fail because my instances are "unhealthy"? (health check)
You need to create a Kubernetes Ingress.
First, you need to expose the deployment from k8s; for HTTPS choose port 443, and the service type can be either LoadBalancer (external IP) or ClusterIP. (You can also test it by accessing the IP or by port forwarding.)
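As a sketch, exposing could be done like this (the deployment name my-app, the service name my-app-svc, and container port 8443 are assumptions, substitute your own):

kubectl expose deployment my-app --name=my-app-svc --port=443 --target-port=8443 --type=ClusterIP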
Then you need to create the ingress.
Inside the YAML file, when choosing the backend, set the serviceName and servicePort to the values that were configured when exposing the deployment.
For example:
- path: /some-route
  backend:
    serviceName: your-service-name
    servicePort: 443
On GCP, when the Ingress is created, a load balancer will be created for it. The backends and instance groups will be built automatically too.
Then, if you want to use the already-created load balancer instead, you just need to select the backend services from the LB that was created by the Ingress and add them to it.
Also, the load balancer will only work if the health checks pass; for that, you need a route that returns a 200 response over HTTPS.
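One common way to influence which route the health check uses on GKE is a readinessProbe on the pod; a minimal sketch, assuming your app serves a /healthz endpoint over HTTPS on container port 8443 (both are assumptions):

readinessProbe:
  httpGet:
    path: /healthz        # assumed endpoint that returns 200
    port: 8443            # assumed HTTPS container port
    scheme: HTTPS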
I have a large read-only elasticsearch database running in a kubernetes cluster on Google Container Engine, and am using minikube to run a local dev instance of my app.
Is there a way I can have my app connect to the cloud elasticsearch instance so that I don't have to create a local test database with a subset of the data?
The database contains sensitive information, so it can't be visible outside its own cluster or VPC.
My fall-back is to run kubectl port-forward inside the local pod:
kubectl --cluster=<gke-database-cluster-name> --token='<token from ~/.kube/config>' port-forward elasticsearch-pod 9200
but this seems suboptimal.
I'd use an ExternalName Service, like this:
kind: Service
apiVersion: v1
metadata:
  name: elastic-db
  namespace: prod
spec:
  type: ExternalName
  externalName: your.elastic.endpoint.com
According to the docs:
An ExternalName service is a special case of service that does not have selectors. It does not define any ports or endpoints. Rather, it serves as a way to return an alias to an external service residing outside the cluster.
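With that Service in place, pods can reach the database through the alias; for example (port 9200 is taken from the port-forward in the question, and the hostname is just the standard in-cluster service DNS form):

curl http://elastic-db.prod.svc.cluster.local:9200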
If you do need to expose the elastic database, there are two ways of exposing applications to outside the cluster:
Creating a Service of type LoadBalancer, which would load-balance the traffic across all instances of your elastic database. Once the load balancer is created on GKE, just add the load balancer's DNS name as the externalName value in the elastic-db Service created above.
Using an Ingress controller. The Ingress controller will have an IP that is reachable from outside the cluster. Use that IP as the externalName in the elastic-db Service created above.
I have set up a simple Kubernetes load balancer service in front of a Node.js container, which should be exposing port 80, but I can't get a response out of it. How can I debug how the load balancer is handling requests to port 80? Are there logs I can inspect?
I have set up a load balancer service and a replication controller as described in the Kubernetes guestbook example.
The service/load balancer spec is similar to this:
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "guestbook",
    "labels": {
      "app": "guestbook"
    }
  },
  "spec": {
    "ports": [
      {
        "port": 3000,
        "targetPort": "http-server"
      }
    ],
    "selector": {
      "app": "guestbook"
    },
    "type": "LoadBalancer"
  }
}
As for my hosting platform, I'm using AWS and the OS is CoreOS alpha (976.0.0). Kubectl is at version 1.1.2.
Kubernetes Info
$ ~/.local/bin/kubectl --kubeconfig=/etc/kubernetes/kube.conf get pods
NAME            READY     STATUS    RESTARTS   AGE
busybox-sleep   1/1       Running   0          18m
web-s0s5w       1/1       Running   0          12h
$ ~/.local/bin/kubectl --kubeconfig=/etc/kubernetes/kube.conf get services
NAME         CLUSTER_IP   EXTERNAL_IP   PORT(S)   SELECTOR   AGE
kubernetes   10.3.0.1     <none>        443/TCP   <none>     1d
web          10.3.0.171
Here is the primary debugging document for Services:
http://kubernetes.io/docs/user-guide/debugging-services/
LoadBalancer creates an external resource. What exactly that resource is depends on your Cloud Provider - some of them don't support it at all (in this case, you might want to try NodePort instead).
Both Google and Amazon support external load balancers.
Overall, when asking these questions it's extremely helpful to know whether you are running on Google Container Engine, Google Compute Engine, Amazon Web Services, Digital Ocean, Vagrant, or whatever, because the answer depends on that. Showing all your configs and all your existing Kubernetes resources (kubectl get pods, kubectl get services), along with your Dockerfiles or the images you are using, will also help.
For Google (GKE or GCE), you would verify the load balancer exists:
gcloud compute forwarding-rules list
The external load balancer will map port 80 to an arbitrary node; the Kubernetes proxy on that node then maps it to an ephemeral port on the node that actually has a Pod with the matching label, and that in turn maps to the container port. So you have to figure out which step along the way isn't working. Unfortunately all those kube-proxy and iptables hops are quite difficult to follow, so usually I would first double-check that all my Pods exist and have labels matching the Service's selector, that my container exposes the right port, that I'm using the right name for the port, and so on. You might want to create some other Pods that just make calls to the Service (using the environment variables or KubeDNS; see the Kubernetes service documentation if you don't know what I'm referring to) and verify that it's accessible internally before debugging the load balancer.
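For instance, a throwaway test pod could look like this (the service name guestbook and port 3000 come from the Service spec above; the DNS name assumes KubeDNS is running, and the --rm flag needs a reasonably recent kubectl):

kubectl run -it --rm testbox --image=busybox -- wget -qO- http://guestbook:3000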
Some other good debugging steps:
Verify that your Kubernetes Service exists:
kubectl get services
kubectl get pods
Check your logs of your pod
kubectl logs <pod name>
Check that your service is registered internally by printing the environment variable for it:
kubectl exec <pod name> -- printenv GUESTBOOK_SERVICE_HOST
Try creating a new pod and see if the service can be reached internally through GUESTBOOK_SERVICE_HOST and GUESTBOOK_SERVICE_PORT.
kubectl describe pod <pod name>
will give you the instance ID of the pod. You can SSH to it, run Docker to verify your container is running, attach to it, etc. If you really want to get into iptables debugging, try
sudo iptables-save
The target port of the LoadBalancer needs to be the port INSIDE the container. So in my case I needed to set the targetPort to 3000 instead of 80 on the LoadBalancer, even though on the pod itself I had already mapped port 80 to 3000.
This was very counterintuitive to me, and it isn't mentioned in any of the LoadBalancer docs.
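In terms of the spec above, the fix amounts to something like this sketch, where 80 is what the load balancer listens on and 3000 is the port the Node.js process binds inside the container:

"ports": [
  {
    "port": 80,
    "targetPort": 3000
  }
]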
I installed CentOS Atomic Host as the operating system for Kubernetes on AWS.
Everything works fine, but it seems I missed something.
I did not configure a cloud provider and cannot find any documentation on that.
In this question I want to know:
1. What features does a cloud provider give to Kubernetes?
2. How do I configure the AWS cloud provider?
UPD 1: the external load balancer does not work; I have not tested awsElasticBlockStore yet, but I suspect it does not work either.
UPD 2:
Service details:
$ kubectl get svc nginx-service-aws-lb -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2016-01-02T09:51:40Z
  name: nginx-service-aws-lb
  namespace: default
  resourceVersion: "74153"
  selfLink: /api/v1/namespaces/default/services/nginx-service-aws-lb
  uid: 6c28b718-b136-11e5-9bda-06c2feb29b0d
spec:
  clusterIP: 10.254.172.185
  ports:
  - name: http-proxy-protocol
    nodePort: 31385
    port: 8080
    protocol: TCP
    targetPort: 8080
  - name: https-proxy-protocol
    nodePort: 31370
    port: 8443
    protocol: TCP
    targetPort: 8443
  selector:
    app: nginx
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
I can't speak to the ProjectAtomic bits, nor to the KUBERNETES_PROVIDER env var, since my experience has been with the CoreOS provisioner; I'll describe my experience and see if that helps you dig a little further into your setup.
Foremost, it is absolutely essential that the controller EC2 and worker EC2 machines have the correct IAM role, which enables the machines to make AWS calls on behalf of your account. This includes things like provisioning ELBs and working with EBS volumes (or attaching an EBS volume to themselves, in the case of a worker). Without that, your cloud-config experience will go nowhere. I'm pretty sure the IAM payloads are defined somewhere other than those .go files, which are hard to read, but that was the quickest link I had handy to show what's needed.
Fortunately, the answers to that question and to the one I'm about to discuss are both centered around the apiserver and the controller-manager: their configuration and the logs they output.
Both the apiserver and the controller-manager take an argument that points to an on-disk cloud configuration file which regrettably isn't documented anywhere except in the source. The Zone field is, in my experience, optional (just as the comments say). However, it was seeing KubernetesClusterTag that led me to follow that field around in the code to see what it does.
If your experience is anything like mine, you'll see in the docker logs of the controller-manager a bunch of error messages about how it created the ELB but could not find any subnets to attach to it (the "docker logs" bit presumes, of course, that ProjectAtomic also uses Docker to run the Kubernetes daemons).
Once I attached a Tag named KubernetesCluster to every instance, with the same string as its value on each (it can be any string, AFAIK), the AWS load balancer integration was able to find the subnet in the VPC, it attached the Nodes to the ELB, and everything was cool -- except for the part where, right now, it can only create Internet-facing ELBs. :-(
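For example, the tagging could be done with the AWS CLI like this (the instance ID and the tag value are placeholders; repeat for every instance in the cluster):

aws ec2 create-tags --resources i-0123456789abcdef0 --tags Key=KubernetesCluster,Value=my-cluster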
Just for clarity: the aws.cfg contains a field named KubernetesClusterTag that allows you to redefine the Tag name that Kubernetes will look for; without any value in that file, Kubernetes will use the Tag name KubernetesCluster.
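A minimal sketch of such a file, assuming the INI-style format the AWS cloud provider reads (the tag name here is a placeholder):

[Global]
KubernetesClusterTag=my-cluster-tag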
I hope this helps you and I hope it helps others, because once Kubernetes is up, it's absolutely amazing.
What features does the cloud provider give to Kubernetes?
Some features that I know of: the external load balancer and persistent volumes.
How do I configure the AWS cloud provider?
There is an environment variable called KUBERNETES_PROVIDER, but it seems this env var only matters when people first bring up a k8s cluster. Since you said "everything works fine", I guess you don't need any further configuration to use the features I mentioned above.
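For reference, that variable is consumed by the cluster bring-up scripts shipped with a Kubernetes release, roughly like this (a sketch of the classic kube-up flow, not something to run on an already-working cluster):

export KUBERNETES_PROVIDER=aws
./cluster/kube-up.sh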