I am trying to run an application in Kubernetes that will be accessed from the outside world via an Ingress controller. The Ingress will add a path '/applicationName', for which I need to configure the reverse proxy settings on the application. What is the best way to handle this requirement in Kubernetes?
I have tried a few workarounds, such as using the cluster DNS name almsmart-nginx-ingress-controller.my-app.svc.cluster.local to resolve the IP, but I am not convinced by that approach.
Any suggestions? Thanks in advance.
The path in your Ingress should point to the service and its port, and your DNS needs to point to the Ingress IP. If you are running on cloud infrastructure, the Ingress controller (one of the ingress implementations) will sit behind a Service of type LoadBalancer (so point your DNS to that), and requests received by the Ingress will then be forwarded to services depending on the Host and Path in the request.
Here is a sample of a spec part of an ingress object:
spec:
  rules:
  - host: first.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: firstservice
          servicePort: 80
  - host: second.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: secondservice
          servicePort: 80
I've deployed Grafana to an AWS EKS cluster and I want to be able to access it from a web browser. If I create a Kubernetes Service of type LoadBalancer, then, based on my very limited AWS networking knowledge, I know this maps to an Elastic Load Balancer. I can get its name, go to Network and Security -> Network Interfaces, and see all the interfaces associated with it, one for each EC2 instance. I presume it's the public IP address associated with each ELB network interface that I need to use in order to reach my Grafana service. Again, my AWS networking knowledge is lacking: what is the fastest and easiest way to make the Grafana Kubernetes Service accessible via my web browser?
The easiest way to expose any app running on Kubernetes is to create a Service of type LoadBalancer.
I use this approach myself for some services to get things up and running quickly.
To get the load balancer name I run
kubectl get svc
which gives me the load balancer's FQDN. I then map it to a DNS record.
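For reference, a minimal LoadBalancer Service for something like Grafana might look like the sketch below; the name, label selector, and ports are assumptions for illustration (Grafana's default container port is 3000):
apiVersion: v1
kind: Service
metadata:
  name: grafana-lb            # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: grafana              # assumes your Grafana pods carry this label
  ports:
  - port: 80                  # port exposed on the ELB
    targetPort: 3000          # Grafana's default container port
    protocol: TCP
Once applied, kubectl get svc will show the ELB's FQDN in the EXTERNAL-IP column.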
The other way I use is to deploy the nginx-ingress-controller.
https://kubernetes.github.io/ingress-nginx/deploy/#aws
This also creates a Service of type LoadBalancer.
I then create the Ingress, which will be mapped to the ingress controller's ELB.
https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
I use this pattern for all my apps: a single nginx-ingress-controller, and therefore a single ELB, fronting all of them. A fanout Ingress along those lines is sketched below.
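As an illustration only, such a fanout Ingress could route several path prefixes through the one controller; the service names and paths below are placeholders:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fanout-ingress               # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx            # assumes the nginx ingress class
  rules:
  - http:
      paths:
      - path: /app1                  # placeholder path
        pathType: Prefix
        backend:
          service:
            name: app1-service       # placeholder service
            port:
              number: 80
      - path: /app2
        pathType: Prefix
        backend:
          service:
            name: app2-service
            port:
              number: 80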
In my specific scenario, the solution was to open up the required port on the LoadBalancer Service.
I'm trying to set up an HTTPS load balancer on GKE with Ingress that does not allow HTTP.
Now, as described on the official site, I deployed a simple application on a private cluster. This application works and can be accessed with a browser.
※ both over an HTTP connection and an HTTPS connection
Then I prohibited HTTP access to the application by turning the frontend protocol "http" off (deleting it) in the LB settings.
At first, an HTTP connection via the browser returned an error, not a connection error. After 5-10 minutes, the HTTP protocol setting was restored automatically.
Here is the YAML file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-name
  annotations:
    # kubernetes.io/ingress.allow-http: "false"
    kubernetes.io/ingress.global-static-ip-name: "ip-name"
spec:
  # tls:
  # This assumes tls-secret exists.
  # - hosts:
  #   - XXXXXXXX.XXX.XXX
  #   secretName: ip-secret  # not used because of google-managed SSL
  rules:
  - http:
      paths:
      # to app
      - path: /*
        backend:
          serviceName: XXXXX-backend
          servicePort: 80
      # to DS Export
      - path: /backend/*
        backend:
          serviceName: XXXXX-be-backend
          servicePort: 80
Is this problem caused by the browser, or by internal settings such as the HTTP health checker on the GCE instances?
If you could confirm which tutorial you're following, we can confirm the test. That said, I think the behavior you are seeing could be expected.
According to the GKE Ingress doc, it states:
"Whenever an HTTP(S) load balancer is configured through Ingress, you must not manually change or update the configuration of the HTTP(S) load balancer. That is, you must not edit any of the load balancer's components, including target proxies, URL maps, and backend services. Any changes that you make will be overwritten by GKE. "
You can try deleting the Ingress, making the edit in your YAML file, recreating it, and then checking whether the removal of HTTP sticks.
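For example, instead of removing the HTTP frontend on the load balancer by hand, you can declare it on the Ingress itself by uncommenting the allow-http annotation that is already in your manifest (a sketch reusing the names from your YAML):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-name
  annotations:
    kubernetes.io/ingress.allow-http: "false"              # GKE then creates only the HTTPS frontend
    kubernetes.io/ingress.global-static-ip-name: "ip-name"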
I created a custom HTTPS LoadBalancer (details) and I need my Kubernetes Workload to be exposed with this LoadBalancer. For now, if I send a request to this endpoint I get a 502 error.
When I choose the Expose option in the Workload Console page, there are only TCP and UDP service types available, and a TCP LoadBalancer is created automatically.
How do I expose a Kubernetes Workload with an existing LoadBalancer? Or maybe I don't even need to do it, and requests don't work because my instances are "unhealthy"? (healthcheck)
You need to create a Kubernetes Ingress.
First, you need to expose the deployment from Kubernetes; for HTTPS choose port 443, and the Service type can be either LoadBalancer (external IP) or ClusterIP. (You can also test that by accessing the IP or by port-forwarding.) A sketch of such a Service follows below.
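As an illustration, a Service exposing a deployment on port 443 might look like this; the name, label selector, and container port are placeholders for your own workload:
apiVersion: v1
kind: Service
metadata:
  name: your-service-name      # placeholder; matches the serviceName referenced by the Ingress
spec:
  type: ClusterIP              # or LoadBalancer if you want an external IP
  selector:
    app: your-app              # assumes your pods carry this label
  ports:
  - port: 443
    targetPort: 8443           # hypothetical container port serving HTTPS
    protocol: TCP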
Then you need to create the Ingress.
Inside the YAML file, when choosing the backend, set the port and serviceName that were configured when exposing the deployment.
For example:
- path: /some-route
  backend:
    serviceName: your-service-name
    servicePort: 443
On GCP, when the Ingress is created, a load balancer will be created for it. The backends and instance groups will be built automatically too.
Then, if you want to use the already created load balancer, you just need to select the backend services from the LB that was created by the Ingress and add them there.
Also, the load balancer will only work if the health checks pass. You need a route that returns a 200 response over HTTPS for that; one way to arrange this is sketched below.
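On GKE the load balancer's health check is typically derived from the pod's readiness probe, so one option is to point the probe at a path that returns 200. This is only a sketch; the path, port, and names are placeholders:
# fragment of the deployment's pod template (names are placeholders)
containers:
- name: your-app
  image: your-image:latest
  ports:
  - containerPort: 8443
  readinessProbe:
    httpGet:
      path: /healthz          # hypothetical endpoint returning 200
      port: 8443
      scheme: HTTPS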
I have a large read-only elasticsearch database running in a kubernetes cluster on Google Container Engine, and am using minikube to run a local dev instance of my app.
Is there a way I can have my app connect to the cloud elasticsearch instance so that I don't have to create a local test database with a subset of the data?
The database contains sensitive information, so it can't be visible outside its own cluster or VPC.
My fallback is to run kubectl port-forward inside the local pod:
kubectl --cluster=<gke-database-cluster-name> --token='<token from ~/.kube/config>' port-forward elasticsearch-pod 9200
but this seems suboptimal.
I'd use an ExternalName Service like:
kind: Service
apiVersion: v1
metadata:
  name: elastic-db
  namespace: prod
spec:
  type: ExternalName
  externalName: your.elastic.endpoint.com
According to the docs
An ExternalName service is a special case of service that does not have selectors. It does not define any ports or endpoints. Rather, it serves as a way to return an alias to an external service residing outside the cluster.
If you need to expose the elastic database, there are two ways of exposing applications outside the cluster:
Creating a Service of type LoadBalancer, that would load balance the traffic for all instances of your elastic database. Once the Load Balancer is created on GKE, just add the load balancer's DNS as the value for the elastic-db ExternalName created above.
Using an Ingress controller. The Ingress controller will have an IP that is reachable from outside the cluster. Use that IP as ExternalName for the elastic-db created above.
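Either way, the application inside the cluster keeps talking to the ExternalName Service's DNS name rather than the external endpoint directly; for example, assuming the default Elasticsearch HTTP port:
# from a pod in the prod namespace (9200 is Elasticsearch's default HTTP port)
curl http://elastic-db:9200
# from another namespace, use the fully qualified name
curl http://elastic-db.prod.svc.cluster.local:9200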
I installed CentOS Atomic Host as the operating system for Kubernetes on AWS.
Everything works fine, but it seems I missed something.
I did not configure a cloud provider and cannot find any documentation on that.
In this question I want to know:
1. What features does a cloud provider give to Kubernetes?
2. How do I configure the AWS cloud provider?
UPD 1: external load balancer does not work; I have not tested awsElasticBlockStore yet, but I also suspect it does not work.
UPD 2:
Service details:
$ kubectl get svc nginx-service-aws-lb -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2016-01-02T09:51:40Z
  name: nginx-service-aws-lb
  namespace: default
  resourceVersion: "74153"
  selfLink: /api/v1/namespaces/default/services/nginx-service-aws-lb
  uid: 6c28b718-b136-11e5-9bda-06c2feb29b0d
spec:
  clusterIP: 10.254.172.185
  ports:
  - name: http-proxy-protocol
    nodePort: 31385
    port: 8080
    protocol: TCP
    targetPort: 8080
  - name: https-proxy-protocol
    nodePort: 31370
    port: 8443
    protocol: TCP
    targetPort: 8443
  selector:
    app: nginx
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
I can't speak to the ProjectAtomic bits, nor to the KUBERNETES_PROVIDER env-var, since my experience has been with the CoreOS provisioner. I will talk about my experiences and see if that helps you dig a little more into your setup.
Foremost, it is absolutely essential that the controller EC2 and the worker EC2 machines have the correct IAM role that will enable the machines to make AWS calls on behalf of your account. This includes things like provisioning ELBs and working with EBS Volumes (or attaching an EBS Volume to themselves, in the case of the worker). Without that, your cloud-config experience will go nowhere. I'm pretty sure the IAM payloads are defined somewhere other than those .go files, which are hard to read, but that's the quickest link I had handy to show what's needed.
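As a rough, non-authoritative sketch of what that role needs (the action list is an assumption and may have to be tightened for your setup), the instance role generally requires EC2 describe/volume permissions plus ELB permissions along these lines:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:Describe*",
        "ec2:AttachVolume",
        "ec2:DetachVolume",
        "ec2:CreateTags",
        "elasticloadbalancing:*"
      ],
      "Resource": "*"
    }
  ]
}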
Fortunately, the answer to that question, and the one I'm about to talk about, both center on the apiserver and the controller-manager: their configuration and the logs they output.
Both the apiserver and the controller-manager have an argument that points to an on-disk cloud configuration file that regrettably isn't documented anywhere except for the source. That Zone field is, in my experience, optional (just like they say in the comments). However, it was seeing the KubernetesClusterTag that led me to follow that field around in the code to see what it does.
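For reference, that argument is the --cloud-config flag, used alongside --cloud-provider=aws; the file path below is a placeholder and the other flags are elided:
kube-apiserver ... --cloud-provider=aws --cloud-config=/etc/kubernetes/aws.cfg
kube-controller-manager ... --cloud-provider=aws --cloud-config=/etc/kubernetes/aws.cfg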
If your experience is anything like mine, you'll see in the docker logs of the controller-manager a bunch of error messages about how it created the ELB but could not find any subnets to attach to it (that "docker logs" bit presumes, of course, that ProjectAtomic also uses docker to run the Kubernetes daemons).
Once I attached a Tag named KubernetesCluster and set every instance of the Tag to the same string (it can be anything, AFAIK), the aws_loadbalancer was able to find the subnet in the VPC, it attached the Nodes to the ELB, and everything was cool -- except for the part where it can only create Internet-facing ELBs right now. :-(
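If it helps, the tagging can be done from the AWS CLI along these lines (the resource IDs and tag value here are placeholders):
aws ec2 create-tags \
  --resources i-0123456789abcdef0 subnet-0123456789abcdef0 \
  --tags Key=KubernetesCluster,Value=my-cluster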
Just for clarity: the aws.cfg contains a field named KubernetesClusterTag that allows you to redefine the Tag that Kubernetes will look for; without any value in that file, Kubernetes will use the Tag name KubernetesCluster.
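A minimal aws.cfg, assuming the INI-style format the in-tree AWS provider reads, might look like this (the zone and tag name are placeholders):
[Global]
Zone = us-west-2a
KubernetesClusterTag = KubernetesCluster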
I hope this helps you and I hope it helps others, because once Kubernetes is up, it's absolutely amazing.
What features does the cloud provider give to Kubernetes?
Some features that I know of: the external load balancer and persistent volumes.
How do I configure the AWS cloud provider?
There is an environment variable called KUBERNETES_PROVIDER, but it seems that env var only matters when people start a k8s cluster. Since you said "everything works fine", I guess you don't need any further configuration to use the features I mentioned above.
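For completeness, KUBERNETES_PROVIDER is consumed by the upstream cluster bring-up scripts, so it would only apply if you were creating the cluster with them (which may not be the case on an Atomic Host setup); roughly:
# only relevant when bringing a cluster up with the bundled cluster scripts
export KUBERNETES_PROVIDER=aws
./cluster/kube-up.sh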