Using Kubernetes 1.12.6-gke.7 or higher, it is possible to create a ManagedCertificate that is then referenced from an Ingress resource exposing a Service to the Internet.
Running kubectl describe managedcertificate certificate-name first indicates the certificate is in a Provisioning state, but it eventually goes to FailedNotVisible.
Despite using a static IP and DNS that resolves fine to the HTTP version of the service, every ManagedCertificate ends up in a "Status: FailedNotVisible" state.
Outline of what I am doing:
1. Generating a reserved (static) external IP address.
2. Configuring a DNS A record in Cloud DNS that points subdomain.domain.com to the IP address from step 1.
3. Creating a ManagedCertificate named "subdomain-domain-certificate" with kubectl apply -f, with spec.domains containing the single domain subdomain.domain.com from the DNS record in step 2.
4. Creating a simple deployment and a service exposing it.
5. Creating an Ingress resource referring to the default backend of the service from step 4, with annotations for the static IP created in step 1 and the managed certificate generated in step 3 (see the manifest sketch after this list).
6. Confirming that the Ingress is created and is assigned the static IP.
7. Visiting http://subdomain.domain.com serves the output from the pod created by the deployment in step 4.
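Roughly, the manifests for steps 3 and 5 look like the sketch below (the certificate name and domain are the real ones; the Ingress, Service, and reserved IP names here are placeholders):

apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: subdomain-domain-certificate
spec:
  domains:
    - subdomain.domain.com
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: subdomain-domain-ingress
  annotations:
    # reserved static IP from step 1 (placeholder name)
    kubernetes.io/ingress.global-static-ip-name: subdomain-domain-ip
    # managed certificate from step 3
    networking.gke.io/managed-certificates: subdomain-domain-certificate
spec:
  backend:
    serviceName: subdomain-domain-service   # placeholder name for the Service from step 4
    servicePort: 80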
After a little while
kubectl describe managedcertificate subdomain-domain-certificate
results in "Status: FailedNotVisible".
Name:         subdomain-domain-certificate
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  networking.gke.io/v1beta1
Kind:         ManagedCertificate
Metadata:
  Creation Timestamp:  2019-04-15T17:35:22Z
  Generation:          1
  Resource Version:    52637
  Self Link:           /apis/networking.gke.io/v1beta1/namespaces/default/managedcertificates/subdomain-domain-certificate
  UID:                 d8e5a0a4-5fa4-11e9-984e-42010a84001c
Spec:
  Domains:
    subdomain.domain.com
Status:
  Certificate Name:    mcrt-ac63730e-c271-4826-9154-c198d654f9f8
  Certificate Status:  Provisioning
  Domain Status:
    Domain:  subdomain.domain.com
    Status:  FailedNotVisible
Events:
  Type    Reason  Age  From                            Message
  ----    ------  ---  ----                            -------
  Normal  Create  56m  managed-certificate-controller  Create SslCertificate mcrt-ac63730e-c271-4826-9154-c198d654f9f8
From what I understand, if the load balancer is configured correctly (handled under the hood by the ManagedCertificate resource) and the DNS (which resolves fine to the non-HTTPS endpoint) checks out, the certificate should go into a Status: Active state?
The issue underlying my problem ended up being a DNSSEC misconfiguration. After running the DNS through https://dnssec-analyzer.verisignlabs.com/ I was able to identify and fix the issue.
DNSSEC was indeed not enabled for my domain, but even after configuring it, the ManagedCertificate configuration was still not going through and I had no clue what was going on. Deleting and re-applying the ManagedCertificate and Ingress manifests did not do the trick. However, issuing the command gcloud beta compute ssl-certificates list showed several unused managed certificates hanging around; deleting them with gcloud compute ssl-certificates delete NAME ... and then restarting the configuration process did the trick in my case.
You need to make sure the domain name resolves to the IP address of your GKE Ingress, following the directions for "creating an Ingress with a managed certificate" exactly.
For more details, see the Google Cloud Load Balancing documentation. From https://cloud.google.com/load-balancing/docs/ssl-certificates#domain-status:
"The status FAILED_NOT_VISIBLE indicates that certificate provisioning failed for a domain because of a problem with DNS or the load balancing configuration. Make sure that DNS is configured so that the certificate's domain resolves to the IP address of the load balancer."
I just ran into this problem when I was setting up a new service and my allowance of 8 external IPs was used up.
Following the troubleshooting guide, I checked whether there was a forwarding rule for port 443 to my ingress.
There wasn't.
When I tried to set it up manually, I got an error telling me I used up my 8 magic addresses.
I deleted forwarding rules I didn't need, et voilà!
Now, why the forwarding rule for port 80 was successfully set up for the same ingress is beyond me.
I ran across this same error and found that I had created the ManagedCertificate in the wrong Kubernetes namespace. Once the ManagedCertificate was placed in the correct namespace, everything worked.
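In other words, the ManagedCertificate has to live in the same namespace as the Ingress that references it. A minimal sketch (the namespace name here is made up):

apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: subdomain-domain-certificate
  namespace: web        # must be the same namespace as the Ingress that references it
spec:
  domains:
    - subdomain.domain.com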
After reading the troubleshooting guide, I still wasn't able to resolve my issue. When I checked the GCP Ingress events, they showed that the Ingress could not locate the SSL policy. Check whether you missed something when creating the Ingress.
And this is another useful reference for verifying your k8s manifests when setting up the managed certificate and Ingress. Hope it helps someone.
Related
I am trying to enable HTTPS on my Istio Ingress Gateway after installing the service mesh, gateway, and applying a routing policy. The initial Istio installation was done using a profile which includes an istio-ingressgateway service. When I do it this way, it creates the ingress gateway as a Kind: Service instead of a Kind: Gateway.
I looked at this: https://istio.io/latest/docs/tasks/traffic-management/ingress/secure-ingress/
But, the tutorial only describes how to apply the certificate to a Gateway kind and not a Service kind.
What is the proper way to apply the SSL certificate to an ingress gateway service or is there a better way to approach this?
Istio Profile YAML
Thanks for your help!
EDIT: Problem Solved.
I went back through the tutorial last night after going down the path of trying to create a ClusterIssuer and installing cert-manager, etc., with poor results (the certificate never got accepted by the certificate authority for some reason, so I only had the key file and an empty cert file). It ended up being easier to create my own certificate.
The issue was that I was referencing the TLS port in my VirtualService when I only needed to point at the port of the Service I was trying to send traffic to from the Gateway.
This article helped me understand it better: Secure Ingress - Istio By Example, along with the official Istio secure-ingress tutorial I linked above.
From there I just created a new secret, ran a script that creates a working certificate (basically just a bash script that follows the steps from the Istio tutorial), and then made sure the credentialName in my Gateway file matched the new secret I created.
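For illustration, here is a minimal sketch of how the pieces fit together; the host, secret name, and destination service/port are placeholders, not my actual values:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: my-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway              # targets the istio-ingressgateway pods
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "myapp.example.com"
    tls:
      mode: SIMPLE
      credentialName: my-ingress-cert  # must match the TLS Secret in istio-system
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
  - "myapp.example.com"
  gateways:
  - istio-system/my-gateway
  http:
  - route:
    - destination:
        host: my-app.default.svc.cluster.local
        port:
          number: 8080                 # the application Service's port, not the gateway's TLS port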
When you deploy the Istio setup, it will create:
kind: Service, istio-ingressgateway
kind: Deployment, istio-ingressgateway
Then you can create the resources below, following https://istio.io/latest/docs/tasks/traffic-management/ingress/secure-ingress/; this will configure your SSL:
kind: Secret, in namespace istio-system
kind: Gateway, with the above Secret referenced in it
kind: VirtualService, linked to this Gateway and to the destination application.
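As a rough sketch of the Secret piece (the name is a placeholder and must match the credentialName used in the Gateway; the certificate and key are whatever you already have on hand):

apiVersion: v1
kind: Secret
metadata:
  name: my-ingress-cert        # placeholder; referenced as credentialName in the Gateway
  namespace: istio-system      # must live in the ingress gateway's namespace
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>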
Some concepts are slightly confused:
The Gateway custom resource configures the istio-ingressgateway, while
the Kubernetes Service creates an externally accessible IP.
Using the externally accessible IP, the traffic will be sent to the istio-ingressgateway, where your certificates are configured using the Gateway CR and you will have an HTTPS connection.
I recommend simply following the steps below:
Install cert-manager from here, using the Helm-chart-based steps.
Then you can follow this Stack Overflow post.
Note that you need not create the TLS secret yourself; cert-manager will auto-create the secret with the name mentioned in your Certificate. cert-manager will carry out the ACME challenge once you patch the secret name into your TLS configuration, and once it succeeds, the Certificate reaches the Ready state.
Use the API version cert-manager.io/v1alpha2 in the ClusterIssuer if the one mentioned there is not accepted.
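For reference, a rough sketch of a ClusterIssuer and Certificate using that API version; the issuer name, email, domain, and secret name below are placeholders, and solving HTTP01 through the Istio ingress class is an assumption on my part:

apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com                 # placeholder contact email
    privateKeySecretRef:
      name: letsencrypt-prod-account-key   # secret for the ACME account key
    solvers:
    - http01:
        ingress:
          class: istio                     # assumes HTTP01 solving through the Istio ingress
---
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: ingress-cert
  namespace: istio-system
spec:
  secretName: ingress-cert-tls             # cert-manager creates this secret automatically
  dnsNames:
  - myapp.example.com
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer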
Does anyone know if Google’s HTTPS load balancer is working?
I was working on setting up an NGINX ingress service, but I noticed that the Google load balancer was automatically being set up by Kubernetes and I was getting two external IPs instead of one. So instead of setting up the NGINX load balancer I decided to use the Google service. I deleted my container cluster and created a brand new one. I started my HTTP pod and HTTP service on port 80. I then created my ingress service and L7 controller pod. Now I'm getting the following error when I review the load balancer logs:
Event(api.ObjectReference{Kind:"Ingress", Namespace:"default",
Name:"echomap", UID:"9943e74c-76de-11e6-8c50-42010af0009b",
APIVersion:"extensions", ResourceVersion:"7935", FieldPath:""}): type:
'Warning' reason: 'GCE' googleapi: Error 400: Validation failed for
instance
'projects/mundolytics/zones/us-east1-c/instances/gke-airportal-default-pool-7753c577-129e':
instance may belong to at most one load-balanced instance group.,
instanceInMultipleLoadBalancedIgs
You probably have one or more hanging backend services. Run gcloud compute backend-services list to find them, and then gcloud compute backend-services delete [SERVICE-NAME] for each one to remove it.
$ gcloud compute backend-services list
NAME                BACKENDS                                                            PROTOCOL
my-hanging-service  us-central1-a/instanceGroups/gke-XXXXXXX-default-pool-XXXXXXX-grp  HTTP
$ gcloud compute backend-services delete my-hanging-service
I have set up a simple Kubernetes load balancer service in front of a Node.js container, which should be exposing port 80, but I can't get a response out of it. How can I debug how the load balancer is handling requests to port 80? Are there logs I can inspect?
I have set up a load balancer service and a replication controller as described in the Kubernetes guestbook example.
The service/load balancer spec is similar to this:
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "guestbook",
    "labels": {
      "app": "guestbook"
    }
  },
  "spec": {
    "ports": [
      {
        "port": 3000,
        "targetPort": "http-server"
      }
    ],
    "selector": {
      "app": "guestbook"
    },
    "type": "LoadBalancer"
  }
}
As for my hosting platform, I'm using AWS and the OS is CoreOS alpha (976.0.0). Kubectl is at version 1.1.2.
Kubernetes Info
$ ~/.local/bin/kubectl --kubeconfig=/etc/kubernetes/kube.conf get pods
NAME            READY     STATUS    RESTARTS   AGE
busybox-sleep   1/1       Running   0          18m
web-s0s5w       1/1       Running   0          12h
$ ~/.local/bin/kubectl --kubeconfig=/etc/kubernetes/kube.conf get services
NAME         CLUSTER_IP   EXTERNAL_IP   PORT(S)   SELECTOR   AGE
kubernetes   10.3.0.1     <none>        443/TCP   <none>     1d
web          10.3.0.171
Here is the primary debugging document for Services:
http://kubernetes.io/docs/user-guide/debugging-services/
LoadBalancer creates an external resource. What exactly that resource is depends on your Cloud Provider - some of them don't support it at all (in this case, you might want to try NodePort instead).
Both Google and Amazon support external load balancers.
Overall, when asking these questions it's extremely helpful to know if you are running on Google Container Engine, Google Compute Engine, Amazon Web Services, Digital Ocean, Vagrant, or whatever, because the answer depends on that. Showing all your configs and all your existing Kubernetes resources (kubectl get pods, kubectl get services) along with your Dockerfiles or which images you are using will also help.
For Google (GKE or GCE), you would verify the load balancer exists:
gcloud compute forwarding-rules list
The external load balancer will map port 80 to an arbitrary node, then the Kubernetes proxy will map that to an ephemeral port on the correct node that actually has a Pod with that label, and then it will map to the container port. So you have to figure out which step along the way isn't working. Unfortunately all those kube-proxy and iptables jumps are quite difficult to follow, so usually I would first double-check that all my Pods exist and have labels that match the selector of the Service. I would double-check that my container is exposing the right port, that I am using the right name for the port, etc. You might want to create some other Pods that just make calls to the Service (using the environment variables or KubeDNS; see the Kubernetes service documentation if you don't know what I'm referring to) and verify it's accessible internally before debugging the load balancer.
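As an illustration of the named-port matching, here is a minimal hedged sketch (the image and names are placeholders); the point is that the Service's targetPort of "http-server" only resolves if a container port with that exact name exists on the selected Pods:

apiVersion: v1
kind: Pod
metadata:
  name: guestbook-example
  labels:
    app: guestbook                               # must match the Service's selector
spec:
  containers:
  - name: node
    image: my-registry/guestbook-node:latest     # placeholder image for the Node.js app
    ports:
    - name: http-server                          # the name targetPort: "http-server" resolves to
      containerPort: 3000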
Some other good debugging steps:
Verify that your Kubernetes Service exists:
kubectl get services
kubectl get pods
Check the logs of your pod
kubectl logs <pod name>
Check that your service is created internally by printing the environment variable for it
kubectl exec <pod name> -- printenv GUESTBOOK_SERVICE_HOST
Try creating a new pod and see if the service can be reached internally through GUESTBOOK_SERVICE_HOST and GUESTBOOK_SERVICE_PORT (see the sketch after these steps).
kubectl describe pod <pod name>
will give the instance ID of the pod; you can SSH to it, run Docker to verify your container is running, attach to it, etc. If you really want to get into the iptables debugging, try
sudo iptables-save
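For the internal reachability check mentioned above, a throwaway pod along these lines can help; it assumes the guestbook Service from the question already exists so that its env vars get injected:

apiVersion: v1
kind: Pod
metadata:
  name: service-check
spec:
  restartPolicy: Never
  containers:
  - name: check
    image: busybox
    # fetches the Service through its injected env vars and reports failure otherwise
    command: ["sh", "-c", "wget -qO- http://$GUESTBOOK_SERVICE_HOST:$GUESTBOOK_SERVICE_PORT/ || echo service not reachable"]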
The targetPort of the LoadBalancer Service needs to be the port INSIDE the container. So in my case I needed to set targetPort to 3000 instead of 80 on the LoadBalancer.
Even though on the pod itself I had already mapped port 80 to 3000.
This is very counterintuitive to me, and not mentioned in any of the LoadBalancer docs.
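In other words, something along these lines (a sketch, assuming the Node.js process listens on port 3000 inside the container):

apiVersion: v1
kind: Service
metadata:
  name: guestbook
  labels:
    app: guestbook
spec:
  type: LoadBalancer
  selector:
    app: guestbook
  ports:
  - port: 80            # port the external load balancer listens on
    targetPort: 3000    # port the Node.js process listens on inside the container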
I installed CentOS Atomic Host as operating system for kubernetes on AWS.
Everything works fine, but it seems I missed something.
I did not configure a cloud provider and cannot find any documentation on that.
In this question I want to know:
1. What features does a cloud provider give to Kubernetes?
2. How do I configure the AWS cloud provider?
UPD 1: the external load balancer does not work; I have not tested awsElasticBlockStore yet, but I suspect it does not work either.
UPD 2:
Service details:
$ kubectl get svc nginx-service-aws-lb -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2016-01-02T09:51:40Z
  name: nginx-service-aws-lb
  namespace: default
  resourceVersion: "74153"
  selfLink: /api/v1/namespaces/default/services/nginx-service-aws-lb
  uid: 6c28b718-b136-11e5-9bda-06c2feb29b0d
spec:
  clusterIP: 10.254.172.185
  ports:
  - name: http-proxy-protocol
    nodePort: 31385
    port: 8080
    protocol: TCP
    targetPort: 8080
  - name: https-proxy-protocol
    nodePort: 31370
    port: 8443
    protocol: TCP
    targetPort: 8443
  selector:
    app: nginx
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
I can't speak to the ProjectAtomic bits, nor to the KUBERNETES_PROVIDER env-var, since my experience has been with the CoreOS provisioner. I will talk about my experiences and see if that helps you dig a little more into your setup.
Foremost, it is absolutely essential that the controller EC2 and the worker EC2 machines have the correct IAM role that will enable the machines to make AWS calls on behalf of your account. This includes things like provisioning ELBs and working with EBS Volumes (or attaching an EBS Volume to themselves, in the case of the worker). Without that, your cloud-config experience will go nowhere. I'm pretty sure the IAM payloads are defined somewhere other than those .go files, which are hard to read, but that's the quickest link I had handy to show what's needed.
Fortunately, the answer to that question and the one I'm about to talk about are both centered around the apiserver and the controller-manager: their configuration and the logs they output.
Both the apiserver and the controller-manager have an argument that points to an on-disk cloud configuration file that regrettably isn't documented anywhere except for the source. That Zone field is, in my experience, optional (just like they say in the comments). However, it was seeing the KubernetesClusterTag that led me to follow that field around in the code to see what it does.
If your experience is anything like mine, you'll see in the docker logs of the controller-manager a bunch of error messages about how it created the ELB but could not find any subnets to attach to it (that "docker logs" bit presumes, of course, that ProjectAtomic also uses Docker to run the Kubernetes daemons).
Once I attached a Tag named KubernetesCluster and set every instance of the Tag to the same string (it can be anything, AFAIK), the aws_loadbalancer was able to find the subnet in the VPC, it attached the Nodes to the ELB, and everything was cool, except that right now it can only create Internet-facing ELBs. :-(
Just for clarity: the aws.cfg contains a field named KubernetesClusterTag that allows you to redefine the Tag that Kubernetes will look for; without any value in that file, Kubernetes will use the Tag name KubernetesCluster.
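To make the wiring concrete, here is a hedged sketch of how the flags mentioned above might be passed to the controller-manager when it runs as a static pod; the image tag, file path, and overall manifest shape are assumptions of mine, not taken from this answer:

apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-controller-manager
    image: gcr.io/google_containers/hyperkube:v1.1.2   # placeholder image/version
    command:
    - /hyperkube
    - controller-manager
    - --master=http://127.0.0.1:8080
    - --cloud-provider=aws                             # enables the ELB/EBS integration
    - --cloud-config=/etc/kubernetes/aws.cfg           # the on-disk cloud configuration file
    volumeMounts:
    - name: etc-kubernetes
      mountPath: /etc/kubernetes
      readOnly: true
  volumes:
  - name: etc-kubernetes
    hostPath:
      path: /etc/kubernetes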
I hope this helps you and I hope it helps others, because once Kubernetes is up, it's absolutely amazing.
What features does a cloud provider give to Kubernetes?
Some features that I know of: the external load balancer and persistent volumes.
How do I configure the AWS cloud provider?
There is an environment variable called KUBERNETES_PROVIDER, but it seems that env var only matters when people start a k8s cluster. Since you said "everything works fine", I guess you don't need any further configuration to use the features I mentioned above.
https://github.com/GoogleCloudPlatform/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch
The Elasticsearch cluster is getting automatically deleted when these configs are used to create it.
From https://github.com/kubernetes/kubernetes/issues/11435 the solution is to remove
kubernetes.io/cluster-service: "true"
Though without this label, Elasticsearch is not available through the Kubernetes master.
Should I create a pull request to remove the line from the files in the repo so people don't get confused?
Firstly, I'd recommend reformatting future questions so they adhere to the stack overflow guidelines: https://stackoverflow.com/help/how-to-ask.
I'd recommend making Elasticsearch a normal Kubernetes Service. You can expose it in one of the following ways:
1. Set service.Type = NodePort and access it via any node's public IP and the nodePort (sketched below)
2. Set service.Type = LoadBalancer; this will only work on cloud providers that have load balancers
3. Expose the RC directly through a host port (not recommended)
Those are just the common options for accessing a Service, please see the following thread for a more detailed discussion: https://groups.google.com/forum/#!topic/kubernetes-sig-network/B-A_RuqpFWk
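As a rough sketch of option 1, assuming the labels and port used by the standard elasticsearch-logging addon (these are assumptions; adjust to your actual manifests):

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging-external
  namespace: kube-system
spec:
  type: NodePort
  selector:
    k8s-app: elasticsearch-logging     # label used by the addon's Elasticsearch pods
  ports:
  - port: 9200                         # Elasticsearch HTTP port
    targetPort: 9200
    nodePort: 30920                    # arbitrary port in the default NodePort range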
It's generally not a good idea to send all external traffic meant for a Kubernetes Service through the apiserver. However, if you must do so, you can do it via an endpoint such as:
/api/v1/proxy/namespaces/default/services/nginx:80/
Where default is the namespace, nginx is the name of your service and 80 is the service port (needed to disambiguate multiport services).