Does anyone know if Google's HTTPS load balancer is working?
I was setting up an NGINX ingress service when I noticed that a Google load balancer was being set up automatically by Kubernetes, so I was getting two external IPs instead of one. Instead of setting up the NGINX load balancer, I decided to use the Google service. I deleted my container cluster and created a brand new one, started my HTTP pod and HTTP service on port 80, and then created my ingress service and L7 controller pod. Now I'm getting the following error when I review the load balancer logs:
Event(api.ObjectReference{Kind:"Ingress", Namespace:"default",
Name:"echomap", UID:"9943e74c-76de-11e6-8c50-42010af0009b",
APIVersion:"extensions", ResourceVersion:"7935", FieldPath:""}): type:
'Warning' reason: 'GCE' googleapi: Error 400: Validation failed for
instance
'projects/mundolytics/zones/us-east1-c/instances/gke-airportal-default-pool-7753c577-129e':
instance may belong to at most one load-balanced instance group.,
instanceInMultipleLoadBalancedIgs
You probably have one or more hanging backend services. Run gcloud compute backend-services list to find them, then run gcloud compute backend-services delete [SERVICE-NAME] for each one to remove it:
$ gcloud compute backend-services list
NAME                BACKENDS                                                            PROTOCOL
my-hanging-service  us-central1-a/instanceGroups/gke-XXXXXXX-default-pool-XXXXXXX-grp  HTTP
$ gcloud compute backend-services delete my-hanging-service
Description
Hello, I have been following a tutorial that sets up my own microservice in the cloud with Go Micro and Kubernetes.
The tutorial has a kubernetes cluster as a prerequisite, so I followed another tutorial by the same author to create a kubernetes cluster.
To sum up the tutorials so you may get the big picture:
I first used Hetzner Cloud to buy some machines in a remote location so I could deploy my Rancher server there. Rancher is a UI tool for creating and managing a Kubernetes cluster.
Therefore, I:
Bought a machine on Hetzner Cloud
Deployed my Rancher server there
Went to a public IP to log into Rancher
Made a kubernetes cluster with one master and one worker node.
Everything was successful there; I could download the cluster's kubeconfig and manipulate the cluster from the command line.
The next tutorial was on how you deploy go micro framework and your own helloworld microservice in the kubernetes cluster.
The tutorial walks you through deploying go micro's services first, and then shows you the deployment for your own microservice.
I managed to do everything, and all of my services are up and running. There is just one problem: I can't log into the micro server with username admin and password micro.
Symptoms
What I can do:
I can list kubernetes pods with kubectl get pods -n micro
I can log into a particular pod (I logged into api like in tutorial) with kubectl exec -it -n micro {{pod}} -- bash
There I can see the micro executable.
From there, the tutorial just says to log in and execute ./micro services, which lists all the services, but I am unable to log in. When I try the default admin/micro combination, it says Invalid token provided.
I checked the JWT keys in MICRO_AUTH_PRIVATE_KEY and MICRO_AUTH_PUBLIC_KEY, and they match in every service.
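For reference, here's roughly how I compared them across the pods (a sketch; it assumes the keys are exposed as environment variables in each pod, and md5sum runs locally just to make mismatches easy to spot):

for p in $(kubectl get pods -n micro -o name); do
  # print "<pod>: <hash of its public key>" for every pod in the micro namespace
  echo -n "${p#pod/}: "
  kubectl exec -n micro "${p#pod/}" -- printenv MICRO_AUTH_PUBLIC_KEY | md5sum
done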
I am able to create another user, but then I get an "access denied to namespace" error when trying to list the services, and I am unable to create rules with that user.
Please help, this has been haunting me for days. 🙏🏽
I have so far configured servers inside Kubernetes containers that used HTTP or terminated HTTPS at the ingress controller. Is it possible to terminate HTTPS (or more generally TLS) traffic from outside the cluster directly on the container, and how would the configuration look in that case?
This is for an on-premises Kubernetes cluster that was set up with kubeadm (with Flannel as the CNI plugin). Suppose the Kubernetes Service were configured with externalIPs 1.2.3.4 (where my-service.my-domain resolves to 1.2.3.4) for service access from outside the cluster at https://my-service.my-domain. How could the web service running inside the container bind to address 1.2.3.4, and how could the client verify a server certificate for 1.2.3.4 when the container's IP address is (AFAIK) some local IP address instead? I currently don't see how this could be accomplished.
UPDATE My current understanding is that when using an Ingress HTTPS traffic would be terminated at the ingress controller (i.e. at the "edge" of the cluster) and further communication inside the cluster towards the backing container would be unencrypted. What I want is encrypted communication all the way to the container (both outside and inside the cluster).
I guess Istio's Envoy proxies are what you need; their main purpose is to authenticate, authorize, and encrypt service-to-service communication.
So, you need a mesh with mTLS authentication, also known as service-to-service authentication.
Visually, Service A is your ingress service and Service B is the service for the HTTP container.
So, you terminate external TLS traffic on the ingress controller, and from there traffic travels further inside the cluster with Istio mTLS encryption.
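For instance, a minimal sketch of turning on strict mTLS for a namespace (assuming Istio is installed and the namespace is labeled for sidecar injection; the namespace name is a placeholder):

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: your-namespace   # placeholder; the namespace your HTTP container runs in
spec:
  mtls:
    mode: STRICT   # sidecars reject any plaintext service-to-service traffic

With this in place, the Envoy sidecars encrypt pod-to-pod traffic transparently, so the app inside the container can keep serving plain HTTP.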
It's not exactly what you asked for:
"terminate HTTPS traffic directly on Kubernetes container"
Though it fulfills the requirement:
"What I want is encrypted communication all the way to the container"
I created a custom HTTPS LoadBalancer (details) and I need my Kubernetes Workload to be exposed with this LoadBalancer. For now, if I send a request to this endpoint I get a 502 error.
When I choose the Expose option on the Workload Console page, only TCP and UDP service types are available, and a TCP LoadBalancer is created automatically.
How do I expose a Kubernetes Workload with an existing LoadBalancer? Or maybe I don't even need to do it, and requests don't work because my instances are "unhealthy"? (healthcheck)
You need to create a Kubernetes ingress.
First, you need to expose the deployment from k8s; for HTTPS choose port 443, and the service type can be either LoadBalancer (external IP) or ClusterIP. (You can also test it by accessing the IP or by port forwarding.)
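For example, a sketch with placeholder names:

# expose the deployment as a ClusterIP service on port 443
kubectl expose deployment your-deployment --name=your-service-name \
  --port=443 --target-port=443 --type=ClusterIP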
Then you need to create the ingress.
Inside the YAML file, when choosing the backend, set the port and serviceName that were configured when exposing the deployment.
For example:
- path: /some-route
  backend:
    serviceName: your-service-name
    servicePort: 443
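A fuller sketch of the manifest around that block (using the extensions/v1beta1 Ingress API that matches the serviceName/servicePort fields above; all names are placeholders):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: your-ingress
spec:
  rules:
  - http:
      paths:
      - path: /some-route
        backend:
          serviceName: your-service-name   # the service created when exposing the deployment
          servicePort: 443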
On GCP, when the ingress is created, a load balancer will be created for it. The backends and instance groups will be built automatically too.
Then, if you want to use the already-created load balancer, you just need to select the backend services from the LB that the ingress created and add them there.
Also, the load balancer will work only if the health checks pass. You need a route that returns a 200 response for that.
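If I remember correctly, the GCE ingress controller derives its health check from the pod's readiness probe, so a sketch like this in the deployment's container spec can help (/healthz is a hypothetical route that returns 200):

readinessProbe:
  httpGet:
    path: /healthz   # placeholder; any route that answers 200
    port: 443
    scheme: HTTPS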
I was able to create some VM instances and add them to an instance group; I also created an HTTP health check and a backend service using the gcloud command in a GCE project, following these guides:
https://cloud.google.com/sdk/gcloud/reference/compute/http-health-checks/create
https://cloud.google.com/sdk/gcloud/reference/compute/backend-services/create
However, I can't find the doc for creating a frontend service, which is required to create a balancer; indeed, the doc for creating a balancer is also missing from the Google Cloud SDK Reference.
Is there really no way to use the gcloud command to create a frontend service and a load balancer?
Found it: it's called forwarding-rules, not frontend-services, which is rather confusing.
And a forwarding rule won't point directly to a backend service. A (global) forwarding rule points to a target HTTP proxy, and the target HTTP proxy needs a URL map.
Reference:
https://cloud.google.com/sdk/gcloud/reference/compute/forwarding-rules/create
Credit to the answer of eSniff here:
https://stackoverflow.com/a/28533614/5581893
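Putting the whole chain together, a sketch with placeholder names:

# the URL map routes incoming requests to a backend service
gcloud compute url-maps create my-url-map --default-service my-backend-service
# the target proxy ties the URL map to a frontend
gcloud compute target-http-proxies create my-proxy --url-map my-url-map
# the global forwarding rule is the actual "frontend": it holds the external IP and port
gcloud compute forwarding-rules create my-rule --global \
  --target-http-proxy my-proxy --ports 80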
I have set up a simple Kubernetes load balancer service in front of a Node.js container, which should be exposing port 80, but I can't get a response out of it. How can I debug how the load balancer is handling requests to port 80? Are there logs I can inspect?
I have set up a load balancer service and a replication controller as described in the Kubernetes guestbook example.
The service/load balancer spec is similar to this:
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "guestbook",
    "labels": {
      "app": "guestbook"
    }
  },
  "spec": {
    "ports": [
      {
        "port": 3000,
        "targetPort": "http-server"
      }
    ],
    "selector": {
      "app": "guestbook"
    },
    "type": "LoadBalancer"
  }
}
As for my hosting platform, I'm using AWS and the OS is CoreOS alpha (976.0.0). Kubectl is at version 1.1.2.
Kubernetes Info
$ ~/.local/bin/kubectl --kubeconfig=/etc/kubernetes/kube.conf get pods
NAME            READY     STATUS    RESTARTS   AGE
busybox-sleep   1/1       Running   0          18m
web-s0s5w       1/1       Running   0          12h
$ ~/.local/bin/kubectl --kubeconfig=/etc/kubernetes/kube.conf get services
NAME         CLUSTER_IP   EXTERNAL_IP   PORT(S)   SELECTOR   AGE
kubernetes   10.3.0.1     <none>        443/TCP   <none>     1d
web          10.3.0.171
Here is the primary debugging document for Services:
http://kubernetes.io/docs/user-guide/debugging-services/
LoadBalancer creates an external resource. What exactly that resource is depends on your Cloud Provider - some of them don't support it at all (in this case, you might want to try NodePort instead).
Both Google and Amazon support external load balancers.
Overall, when asking these questions it's extremely helpful to say whether you are running on Google Container Engine, Google Compute Engine, Amazon Web Services, Digital Ocean, Vagrant, or whatever, because the answer depends on that. Showing all your configs and all your existing Kubernetes resources (kubectl get pods, kubectl get services), along with your Dockerfiles or the images you are using, will also help.
For Google (GKE or GCE), you would verify the load balancer exists:
gcloud compute forwarding-rules list
The external load balancer will map port 80 to an arbitrary node, then the Kubernetes proxy will map that to an ephemeral port on the correct node that actually has a pod with the matching label, and finally it will map to the container port. So you have to figure out which step along the way isn't working. Unfortunately, all those kube-proxy and iptables jumps are quite difficult to follow, so usually I would first double-check that all my pods exist and have labels that match the selector of the Service, that my container is exposing the right port, that I am using the right name for the port, etc. You might want to create some other pods that just make calls to the Service (using the environment variables or KubeDNS; see the Kubernetes service documentation if you don't know what I'm referring to) and verify it's accessible internally before debugging the load balancer.
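For instance, comparing the Service's selector against the endpoints it actually matched is a quick first check (using the service name from the spec above):

kubectl describe service guestbook   # shows the selector and the matched endpoints
kubectl get endpoints guestbook      # an empty ENDPOINTS column means no pod matches the selector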
Some other good debugging steps:
Verify that your Kubernetes Service exists:
kubectl get services
kubectl get pods
Check the logs of your pod
kubectl logs <pod name>
Check that your service is created internally by printing the environment variable for it
kubectl exec <pod name> -- printenv GUESTBOOK_SERVICE_HOST
Try creating a new pod and see if the service can be reached internally through GUESTBOOK_SERVICE_HOST and GUESTBOOK_SERVICE_PORT.
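A sketch of that, using busybox as a throwaway in-cluster client:

kubectl run -it --rm debug --image=busybox --restart=Never -- sh
# then, inside the new pod:
wget -qO- http://$GUESTBOOK_SERVICE_HOST:$GUESTBOOK_SERVICE_PORT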
kubectl describe pod <pod name>
will tell you which instance the pod is running on; you can SSH to it and run Docker to verify your container is running, attach to it, etc. If you really want to get into the iptables debugging, try
sudo iptables-save
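kube-proxy includes the service name in the comments on its rules, so filtering the output narrows things down (a sketch):

# show only the rules kube-proxy created for this service
sudo iptables-save | grep guestbook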
The target port of the LoadBalancer needs to be the port INSIDE the container. So in my case I needed to set the targetPort to 3000 instead of 80 on the LoadBalancer, even though on the pod itself I had already mapped port 80 to 3000.
This was very counterintuitive to me, and it isn't mentioned in any of the LoadBalancer docs.
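So, for the spec above, the corrected ports block looks like this (80 is what the load balancer exposes; 3000 is what the Node.js server listens on inside the container):

"ports": [
  {
    "port": 80,
    "targetPort": 3000
  }
]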