I am trying to enable HTTPS on my Istio ingress gateway after installing the service mesh, the gateway, and a routing policy. The initial Istio installation was done using a profile that includes an istio-ingressgateway service. Installed this way, the ingress gateway is created as kind: Service rather than kind: Gateway.
I looked at this: https://istio.io/latest/docs/tasks/traffic-management/ingress/secure-ingress/
But, the tutorial only describes how to apply the certificate to a Gateway kind and not a Service kind.
What is the proper way to apply the SSL certificate to an ingress gateway service, or is there a better way to approach this?
Istio Profile YAML
Thanks for your help!
EDIT: Problem Solved.
I went back through the tutorial last night after going down the path of creating a ClusterIssuer and installing cert-manager, etc., with poor results (the certificate never got accepted by the Certificate Authority for some reason, so I only had the key file and an empty cert file). It ended up being easier to create my own certificate.
The issue was that I was referencing the TLS port in my VirtualService, when I only needed to point to the port of the service that the gateway should send traffic to.
This article helped me understand it better: Secure Ingress - Istio By Example, along with the official Istio secure-ingress tutorial I linked above.
From there I just created a new secret, ran a script that generates a working certificate (basically a bash script that follows the steps from the Istio tutorial), and made sure the credentialName in my Gateway file matched the new secret I created.
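For reference, this is roughly what the corrected piece looked like; the resource names, host, and port numbers below are placeholders rather than my actual values, and the important part is that the route points at the backend Service's own port instead of the gateway's TLS port:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-app                      # hypothetical name
spec:
  hosts:
  - "myapp.example.com"             # hypothetical host, must match the Gateway's hosts
  gateways:
  - my-gateway                      # the Gateway that holds the TLS config
  http:
  - route:
    - destination:
        host: my-app-service        # the backend Service receiving traffic from the gateway
        port:
          number: 80                # the Service's plain port, not the gateway's 443/TLS port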
When you deployed the Istio setup, it created:
kind: Service, istio-ingressgateway
kind: Deployment, istio-ingressgateway
Then you can create the following, per https://istio.io/latest/docs/tasks/traffic-management/ingress/secure-ingress/, which will configure your SSL:
kind: Secret, in namespace istio-system (see the sketch below)
kind: Gateway, referencing the Secret above
kind: VirtualService, linked to this Gateway and to the destination application
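As a rough sketch of the Secret (the name and certificate contents are placeholders; the cert and key come from the steps in the Istio tutorial):

apiVersion: v1
kind: Secret
metadata:
  name: my-tls-credential                 # hypothetical; referenced by the Gateway's credentialName
  namespace: istio-system
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>   # placeholder
  tls.key: <base64-encoded private key>   # placeholder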
Some concepts are slightly confused:
The Gateway custom resource configures the istio-ingressgateway, while
the Kubernetes Service creates an externally accessible IP.
Traffic sent to that externally accessible IP reaches the istio-ingressgateway, where the certificates configured through the Gateway CR terminate the HTTPS connection.
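To make that concrete, a minimal Gateway CR might look like the sketch below; the name and host are assumptions, and credentialName must match the name of the TLS Secret in istio-system:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway                  # hypothetical name
spec:
  selector:
    istio: ingressgateway           # selects the istio-ingressgateway pods created by the profile
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: my-tls-credential   # must match the Secret holding tls.crt/tls.key
    hosts:
    - "myapp.example.com"           # hypothetical host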
I recommend simply following these steps:
Install cert-manager from here, using the Helm-chart-based steps.
Then you can follow this Stack Overflow post.
Note that you do not need to create the TLS secret yourself here; cert-manager will automatically create a secret with the name referenced in your Certificate. cert-manager will carry out the ACME challenge once you patch the secret name into your TLS configuration, and once the challenge succeeds the certificate reaches the Ready state.
Use the API version cert-manager.io/v1alpha2 in the ClusterIssuer if the one mentioned there is not accepted.
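As a hedged sketch of a ClusterIssuer using that API version (the issuer name, email, ACME server, and solver class are assumptions for illustration, not values from this post):

apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod            # hypothetical name
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com          # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-account-key # hypothetical; stores the ACME account key
    solvers:
    - http01:
        ingress:
          class: istio              # assumption: solve HTTP-01 challenges through the Istio ingress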
Related
I open Visual Studio 2019 and create a new project (Container Application for Kubernetes). I tick "Enable HTTPS support", and when I start debugging in Visual Studio I can browse to the HTTPS address.
I then try to go one step further. I have Kubernetes enabled in Docker Desktop on my development PC and follow these instructions (after opening all the .yaml files and changing all references of https to http and all references of port 80 to port 443):
1) cd C:\mvcsecure
2) docker build -t mvcsecure:stable -f c:\mvcsecure\mvcsecure\Dockerfile .
3) cd c:\mvcsecure\mvcsecure\charts
4) helm install mvcsecure ./mvcsecure/
5) kubectl expose deployment mvcsecure --type=NodePort --name=mvcsecure-service
6) kubectl get service
mvcsecure-service NodePort 10.96.128.133 <none> 443:31577/TCP 6s
7) I then try to browse to: https://localhost:31577 and it says:
Cannot securely connect to this page
Notice there is no option to trust a certificate or anything.
What changes must I make to the default Helm charts created by Visual Studio to get HTTPS working on my basic service? I cannot find any documentation or examples online. It would be great to see an example of an HTTPS service (MVC or API) deployed to Kubernetes using Helm. I could post the .yaml file code if needed; however, there is a lot of it.
I want to use the Kubernetes cluster root certificate as described here: How to access a kubernetes service through https?
I have checked that all TLS and SSL options are ticked in Internet Options.
If your application accepts HTTP traffic and you want to make it secure (HTTPS), I suggest trying TLS termination with a Kubernetes Ingress.
The Kubernetes documentation has a great explanation of how to configure TLS termination. With an Ingress object you can make your HTTP service accessible via HTTPS from outside the cluster.
This means that connections to the service are made over HTTPS and decrypted to HTTP inside your cluster before reaching the service.
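A minimal sketch of such an Ingress, assuming an ingress controller is already installed and a TLS secret (here called mvcsecure-tls, a made-up name) holds the certificate; depending on your cluster version the apiVersion and backend syntax may differ:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mvcsecure-ingress           # hypothetical name
spec:
  tls:
  - hosts:
    - mvcsecure.example.com         # placeholder host
    secretName: mvcsecure-tls       # existing secret with tls.crt / tls.key
  rules:
  - host: mvcsecure.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: mvcsecure-service # the service exposed in the question
            port:
              number: 80            # plain HTTP port; TLS is terminated at the Ingress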
Hope it helps.
I have set up the Elastic Stack with the Search Guard plugin in a Kubernetes environment. Now I want to replace the demo certificates with my own certificate. I also want the Search Guard Kibana dashboard to be exposed through the ingress proxy with a secured SSL/TLS connection. For the time being, even if I could use the Ingress certificate as the Search Guard node, admin, and REST certificate, that would be fine for me; I don't want different certificates for nodes, admin, and REST. How can I achieve this? I tried updating the Kubernetes secrets, but I am not sure whether a running pod picks up the updated secret without a restart. And when I do restart them, the pods never come back to the Running state. What is the right way to achieve this? Can someone please provide detailed steps?
I recommend having a look at how we do it in our Helm charts: https://github.com/floragunncom/search-guard-helm
That said, you can use the same certificate for nodes and REST, but the admin certificate needs to be a different one. If you update the certs, you must restart the pods.
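As a rough sketch of how the certificates are typically wired in (the secret name, namespace, and file names here are my assumptions, not the chart's actual values), the PEM files live in a Secret that gets mounted into the Elasticsearch pods, which is why a restart is needed for new certs to take effect:

apiVersion: v1
kind: Secret
metadata:
  name: sg-tls-certs                # hypothetical name
  namespace: elastic                # placeholder namespace
type: Opaque
data:
  node.pem: <base64>                # placeholder; certificate used for nodes and REST
  node-key.pem: <base64>            # placeholder; matching private key
  admin.pem: <base64>               # placeholder; a separate admin certificate
  admin-key.pem: <base64>           # placeholder
  root-ca.pem: <base64>             # placeholder; CA that signed the above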
Using Kubernetes 1.12.6-gke.7 or higher it is possible to create a ManagedCertificate which is then referenced from an Ingress Resource exposing a Service to the Internet.
Running kubectl describe managedcertificate certificate-name first indicates the certificate is in a Provisioning state but eventually goes to FailedNotVisible.
Despite using a static IP and DNS that resolves fine to the HTTP version of said service, all ManagedCertificates end up in a "Status: FailedNotVisible" state.
Outline of what I am doing:
1. Generating a reserved (static) external IP address
2. Configuring a DNS A record in Cloud DNS pointing subdomain.domain.com to the IP address generated in step 1
3. Creating a ManagedCertificate named "subdomain-domain-certificate" with kubectl apply -f, with spec.domains containing a single domain corresponding to the subdomain.domain.com DNS record from step 2
4. Creating a simple deployment and a service exposing it
5. Creating an Ingress resource that refers to the service from step 4 as its default backend, with annotations for the static IP created in step 1 and the managed certificate created in step 3 (see the manifests sketched after this list)
6. Confirming that the Ingress is created and is assigned the static IP
7. Visiting http://subdomain.domain.com serves the output from the pod created by the deployment in step 4
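For reference, the manifests look roughly like this (the Ingress name, static IP name, and backend service name are placeholders, and the Ingress apiVersion may differ on newer clusters):

apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: subdomain-domain-certificate
spec:
  domains:
  - subdomain.domain.com
---
apiVersion: extensions/v1beta1      # may be networking.k8s.io/v1 on newer clusters
kind: Ingress
metadata:
  name: subdomain-domain-ingress    # hypothetical name
  annotations:
    kubernetes.io/ingress.global-static-ip-name: my-static-ip            # reserved IP from step 1
    networking.gke.io/managed-certificates: subdomain-domain-certificate # certificate from step 3
spec:
  backend:
    serviceName: my-service         # hypothetical service from step 4
    servicePort: 80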
After a little while
kubectl describe managedcertificate subdomain-domain-certificate
results in "Status: FailedNotVisible".
Name:         subdomain-domain-certificate
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  networking.gke.io/v1beta1
Kind:         ManagedCertificate
Metadata:
  Creation Timestamp:  2019-04-15T17:35:22Z
  Generation:          1
  Resource Version:    52637
  Self Link:           /apis/networking.gke.io/v1beta1/namespaces/default/managedcertificates/subdomain-domain-certificate
  UID:                 d8e5a0a4-5fa4-11e9-984e-42010a84001c
Spec:
  Domains:
    subdomain.domain.com
Status:
  Certificate Name:    mcrt-ac63730e-c271-4826-9154-c198d654f9f8
  Certificate Status:  Provisioning
  Domain Status:
    Domain:  subdomain.domain.com
    Status:  FailedNotVisible
Events:
  Type    Reason  Age  From                            Message
  ----    ------  ---  ----                            -------
  Normal  Create  56m  managed-certificate-controller  Create SslCertificate mcrt-ac63730e-c271-4826-9154-c198d654f9f8
From what I understand, if the load balancer is configured correctly (done under the hood by the ManagedCertificate resource) and the DNS (which resolves fine to the non-HTTPS endpoint) checks out, the certificate should go into a Status: Active state?
The issue underlying my problem ended up being a DNSSEC misconfiguration. After running the DNS through https://dnssec-analyzer.verisignlabs.com/ I was able to identify and fix the issue.
DNSSEC was indeed not enabled for my domain, but after configuring it, the ManagedCertificate configuration was still not going through and I had no clue what was going on. Deleting and re-applying the ManagedCertificate and Ingress manifests did not do the trick. But issuing the command gcloud beta compute ssl-certificates list showed several unused managed certificates hanging around; deleting them with gcloud compute ssl-certificates delete NAME ... and then restarting the configuration process did the trick in my case.
You need to make sure the domain name resolves to the IP address of your GKE Ingress, following the directions for "creating an Ingress with a managed certificate" exactly.
For more details, see the Google Cloud Load Balancing documentation. From https://cloud.google.com/load-balancing/docs/ssl-certificates#domain-status:
"The status FAILED_NOT_VISIBLE indicates that certificate provisioning failed for a domain because of a problem with DNS or the load balancing configuration. Make sure that DNS is configured so that the certificate's domain resolves to the IP address of the load balancer."
I just ran into this problem when I was setting up a new service and my allowance of 8 external IPs was used up.
Following the troubleshooting guide, I checked whether there was a forwarding rule for port 443 to my ingress.
There wasn't.
When I tried to set it up manually, I got an error telling me I used up my 8 magic addresses.
I deleted forwarding rules I didn't need et voila!
Now, why the forwarding rule for port 80 was successfully set up for the same ingress is beyond me.
I ran across this same error and found that I had created the ManagedCertificate in the wrong Kubernetes namespace. Once the ManagedCertificate was placed in the correct namespace, everything worked.
After reading the troubleshooting guide, I still wasn't able to resolve my issue. When I checked the GCP Ingress events, they showed that the Ingress could not locate the SSL policy. Check whether you missed something when creating the Ingress.
And this is another useful reference for verifying your k8s manifests when setting up the managed certificate and Ingress. Hope it helps someone.
I installed CentOS Atomic Host as the operating system for Kubernetes on AWS.
Everything works fine, but it seems I missed something.
I did not configure a cloud provider and cannot find any documentation on that.
In this question I want to know:
1. What features does a cloud provider give to Kubernetes?
2. How do I configure the AWS cloud provider?
UPD 1: The external load balancer does not work. I have not tested awsElasticBlockStore yet, but I suspect it does not work either.
UPD 2:
Service details:
$ kubectl get svc nginx-service-aws-lb -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2016-01-02T09:51:40Z
  name: nginx-service-aws-lb
  namespace: default
  resourceVersion: "74153"
  selfLink: /api/v1/namespaces/default/services/nginx-service-aws-lb
  uid: 6c28b718-b136-11e5-9bda-06c2feb29b0d
spec:
  clusterIP: 10.254.172.185
  ports:
  - name: http-proxy-protocol
    nodePort: 31385
    port: 8080
    protocol: TCP
    targetPort: 8080
  - name: https-proxy-protocol
    nodePort: 31370
    port: 8443
    protocol: TCP
    targetPort: 8443
  selector:
    app: nginx
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
I can't speak to the ProjectAtomic bits, nor to the KUBERNETES_PROVIDER env-var, since my experience has been with the CoreOS provisioner. I will talk about my experiences and see if that helps you dig a little more into your setup.
Foremost, it is absolutely essential that the controller EC2 and the worker EC2 machines have the correct IAM role that will enable the machines to make AWS calls on behalf of your account. This includes things like provisioning ELBs and working with EBS Volumes (or attaching an EBS Volume to themselves, in the case of the worker). Without that, your cloud-config experience will go nowhere. I'm pretty sure the IAM payloads are defined somewhere other than those .go files, which are hard to read, but that's the quickest link I had handy to show what's needed.
Fortunately, the answer to that question and the one I'm about to talk about are both centered around the apiserver and the controller-manager: their configuration and the logs they output.
Both the apiserver and the controller-manager have an argument that points to an on-disk cloud configuration file that regrettably isn't documented anywhere except for the source. That Zone field is, in my experience, optional (just like they say in the comments). However, it was seeing the KubernetesClusterTag that led me to follow that field around in the code to see what it does.
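As an illustrative sketch only (the image tag, file paths, and static-pod layout are my assumptions, not something specific to ProjectAtomic), the controller-manager ends up being started with both the provider flag and the path to that on-disk cloud config:

apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - name: kube-controller-manager
    image: k8s.gcr.io/kube-controller-manager:v1.x   # placeholder tag
    command:
    - kube-controller-manager
    - --cloud-provider=aws
    # path to the on-disk file holding fields like Zone and KubernetesClusterTag (see below)
    - --cloud-config=/etc/kubernetes/cloud.conf
    volumeMounts:
    - name: cloud-config
      mountPath: /etc/kubernetes/cloud.conf
      readOnly: true
  volumes:
  - name: cloud-config
    hostPath:
      path: /etc/kubernetes/cloud.conf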
If your experience is anything like mine, you'll see in the docker logs of the controller-manager a bunch of error messages about how it created the ELB but could not find any subnets to attach to it; (that "docker logs" bit is presuming, of course, that ProjectAtomic also uses docker to run the Kubernetes daemons).
Once I attached a Tag named KubernetesCluster and set every instance of the Tag to the same string (it can be anything, AFAIK), the aws_loadbalancer was able to find the subnet in the VPC, it attached the Nodes to the ELB, and everything was cool -- except for the part where it can only create Internet-facing ELBs right now. :-(
Just for clarity: the aws.cfg contains a field named KubernetesClusterTag that allows you to redefine the Tag that Kubernetes will look for; without any value in that file, Kubernetes will use the Tag name KubernetesCluster.
I hope this helps you and I hope it helps others, because once Kubernetes is up, it's absolutely amazing.
What features does a cloud provider give to Kubernetes?
Some features that I know of: the external load balancer and persistent volumes.
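For the persistent-volume side, a minimal sketch using the in-tree awsElasticBlockStore plugin (the PV name and volume ID are placeholders; the EBS volume must already exist in the same zone as the node that mounts it):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ebs-pv                      # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0 # placeholder EBS volume ID
    fsType: ext4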
How do I configure the AWS cloud provider?
There is an environment variable called KUBERNETES_PROVIDER, but it seems the env var only matters when people start a k8s cluster. Since you said "everything works fine", I guess you don't need any further configuration to use the features I mentioned above.
https://github.com/GoogleCloudPlatform/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch
The cluster gets automatically deleted when I use these configs to create it.
From https://github.com/kubernetes/kubernetes/issues/11435, the solution is to remove
kubernetes.io/cluster-service: "true"
Though without this label, Elasticsearch is not available through the Kubernetes master.
Should I create a pull request to remove the line from the files in the repo so people don't get confused?
Firstly, I'd recommend reformatting future questions so they adhere to the stack overflow guidelines: https://stackoverflow.com/help/how-to-ask.
I'd recommend making Elasticsearch a normal Kubernetes Service. You can expose it in one of the following ways:
1. Set service.Type = NodePort and access it via any node's public IP at node:nodePort (see the sketch below)
2. Set service.Type = LoadBalancer; this will only work on cloud providers that have load balancers
3. Expose the RC directly through a host port (not recommended)
Those are just the common options for accessing a Service; please see the following thread for a more detailed discussion: https://groups.google.com/forum/#!topic/kubernetes-sig-network/B-A_RuqpFWk
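A sketch of option 1 as a NodePort Service, assuming the k8s-app: elasticsearch-logging label and port 9200 used by the fluentd-elasticsearch addon manifests (adjust the selector and namespace if yours differ):

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-nodeport      # hypothetical name
  namespace: kube-system
spec:
  type: NodePort
  selector:
    k8s-app: elasticsearch-logging  # assumed label from the addon manifests
  ports:
  - port: 9200                      # Elasticsearch HTTP port
    targetPort: 9200
    nodePort: 30920                 # arbitrary port in the default NodePort range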
It's generally not a good idea to send all external traffic meant for a Kubernetes Service through the apiserver. However, if you must do so, you can do it via an endpoint such as:
/api/v1/proxy/namespaces/default/services/nginx:80/
Where default is the namespace, nginx is the name of your service and 80 is the service port (needed to disambiguate multiport services).