I checked the logs of the istio-proxy sidecar containers in the api-service deployment pods and in the default istio-ingressgateway deployment. The path stays the same, unrewritten, from the ingressgateway all the way to my service. I expect requests to look something like this:
Client: 'GET mysite.com/api/some-resource/123/'
        |
        v
Ingressgateway: 'GET mysite.com/api/some-resource/123/'
        |
        v
VirtualService: rewrite.uri: /
        |
        v
api-service: 'GET mysite.com/some-resource/123/'
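For reference, checking those logs looks roughly like this (a sketch; the istio-system namespace, deployment name, and sidecar container name are assumptions about a default Istio install):
# ingressgateway access logs
kubectl logs -n istio-system deploy/istio-ingressgateway --tail=50
# sidecar access logs of an api-service pod (pod name is a placeholder)
kubectl logs <api-service-pod> -c istio-proxy --tail=50
My VirtualService looks like this: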
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-route-rules
spec:
  hosts:
  - mysite.com
  gateways:
  - istio-system/mysite-gateway
  http:
  - match:
    - uri:
        prefix: /api
    rewrite:
      uri: /
    route:
    - destination:
        host: api-service.default.svc.cluster.local
        port:
          number: 7000
  - route:
    - destination:
        host: web-experience.default.svc.cluster.local
        port:
          number: 9000
I've found that the rewrite was actually happening, but the Envoy sidecar does not reflect it in its logs as I had assumed.
I had inferred from the docs (see the description of the rewrite field) that the Envoy sidecar would log the rewritten path:
Rewrite will be performed before forwarding.
I checked the access logs of the web server running in my api-service deployment and found malformed requests: GET //some-resource/123/ (from /api/some-resource/123/).
It turns out the extra / (from rewrite.uri: /) was causing the 404 errors. A GitHub comment on an Istio issue presented a fix: rewrite to a whitespace string instead of /.
As the user warns, it's uncertain whether this behavior is intended.
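For reference, the workaround from that comment looks roughly like this (a sketch; per the linked issue the single-space rewrite avoids the double slash, but as noted it's unclear whether this is intended behavior):
http:
- match:
  - uri:
      prefix: /api
  rewrite:
    uri: " "   # single whitespace instead of "/", per the linked comment
  route:
  - destination:
      host: api-service.default.svc.cluster.local
      port:
        number: 7000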
I am trying to set up authentication with Okta for the Elastic Stack on Google Cloud. The first step in the Okta guide is to route the cluster address through a certain endpoint and path, as shown here.
Well, I have an Ingress defined as follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  tls:
  - hosts:
    - x.x.x.net
  rules:
  - host: x.x.x.net
    http:
      paths:
      - path: /api/security/v1/saml
        pathType: Prefix
        backend:
          service:
            name: kibana-kibana
            port:
              number: 5601
But every time I try to access the host from the browser with that path appended, I get an error page like this:
{"statusCode":404,"error":"Not Found","message":"Not Found"}
What could possibly cause this? I cannot access the host from my browser with the path, yet if I remove the path, the dashboard is accessible.
Thanks
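One way to narrow this down is to bypass the ingress and hit Kibana directly with the same path, to see whether the 404 JSON comes from Kibana itself or from the ingress (a sketch, assuming the kibana-kibana service listens on 5601 in the current namespace):
kubectl port-forward svc/kibana-kibana 5601:5601
curl -i http://localhost:5601/api/security/v1/saml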
I have a domain, foobar.com. When I started my project, I knew I would have my webserver handling traffic for foobar.com. I also plan on having an Elasticsearch server that I want running at es.foobar.com. I purchased my domain at GoDaddy and I (maybe prematurely) purchased a single-site certificate for foobar.com. I can't change this certificate to a wildcard cert; I would have to purchase a new one. I have my DNS record routing traffic for that simple URL. I'm managing everything using Kubernetes.
Questions:
Is it possible to use my simple single-site certificate for the main site and subdomains like my elasticsearch server or do I need to purchase another single-site certificate specifically for the elasticsearch server? I checked earlier and GoDaddy wants $350 for the multisite one.
ElasticSearch complicates this somewhat since if it's being accessed at es.foobar.com and the cert is for foobar.com it's going to reject any requests, right? Elasticsearch needs a cert in order to have solid security.
Is it possible to use my simple single-site certificate for the main site and subdomains?
To achieve your goal, you can use name-based virtual hosting with an Ingress, since your webserver (foobar.com) and Elasticsearch (es.foobar.com) most likely run on different ports and will be available under the same IP.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  rules:
  - host: foobar.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: webserver
            port:
              number: 80
  - host: es.foobar.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: elastic
            port:
              number: 9200  # http.port parameter in the elastic config (default range 9200-9300)
It can also be implemented using a TLS private key and certificate stored in a Kubernetes Secret and referenced from a tls section in the Ingress (see the Secret-creation sketch after the manifest). A wildcard certificate covers just one level, like *.foobar.com.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  tls:
  - hosts:
    - foobar.com
    - es.foobar.com
    secretName: "foobar-secret-tls"
  rules:
  - host: foobar.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: webserver
            port:
              number: 80
  - host: es.foobar.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: elastic
            port:
              number: 9200  # http.port parameter in the elastic config (default range 9200-9300)
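The foobar-secret-tls Secret referenced above can be created from the certificate and key files, for example (a sketch; the file paths are placeholders, and the certificate has to be valid for both hosts, e.g. via SANs or a wildcard):
kubectl create secret tls foobar-secret-tls --cert=path/to/tls.crt --key=path/to/tls.key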
Either way, you need to get a wildcard certificate or a separate certificate for the other domain.
I have a GKE Ingress with both HTTP and HTTPS. I want to redirect traffic from port 80 to port 443.
I found this:
https://github.com/kubernetes/ingress-gce/issues/1075
which led to this:
https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#https_redirect.
The proposed solution adds a FrontendConfig with a RedirectToHttps flag which uses some LoadBalancer functionality. Yet when I try to add the FrontendConfig, I get the following error:
error: unable to recognize "ssl.yaml": no matches for kind "FrontendConfig" in version "networking.gke.io/v1beta1"
I have also tried 'networking.gke.io/v1' and 'v1beta2'.
The latest GKE version available in my zone is 1.17.13-gke.2001. I have recently launched the cluster so although I don't know how to check the GKE version, I reckon it's running on the latest version.
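For reference, one way to check the version the cluster is actually running (a sketch, assuming the gcloud CLI is configured for the right project):
gcloud container clusters list    # shows MASTER_VERSION per cluster
kubectl version --short           # shows client and server versions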
Does anyone have a clue why kubectl doesn't recognize this kind?
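A quick way to check whether the FrontendConfig kind is registered in the cluster at all (a sketch):
kubectl api-resources | grep -i frontendconfig
kubectl get crd | grep -i frontendconfig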
Ingress yaml:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
  annotations:
    FrontendConfig: my-frontend-config
    kubernetes.io/ingress.global-static-ip-name: 'web-static-ip'
    networking.gke.io/managed-certificates: mycertificate
    # kubernetes.io/ingress.allow-http: "false"
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: frontend
          servicePort: 80
      - path: /api/*
        backend:
          serviceName: backend
          servicePort: 80
Redirect yaml:
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: my-frontend-config
spec:
  redirectToHttps:
    enabled: true
Thank you for pointing me in the right direction!
I had to upgrade the cluster as MrKoopaKiller indicated and also changed the annotation:
FrontendConfig: my-frontend-config
to:
networking.gke.io/v1beta1.FrontendConfig: "my-frontend-config"
and it worked!
Also, make sure you have:
kubernetes.io/ingress.allow-http: "true"
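Putting it together, the working Ingress metadata looks roughly like this (a sketch assembled from the changes above):
metadata:
  name: basic-ingress
  annotations:
    networking.gke.io/v1beta1.FrontendConfig: "my-frontend-config"
    kubernetes.io/ingress.global-static-ip-name: 'web-static-ip'
    networking.gke.io/managed-certificates: mycertificate
    kubernetes.io/ingress.allow-http: "true"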
Learning k8s + Istio here. I've set up a 2-node + 1-master cluster with kops, with Istio as the ingress controller. I'm trying to set up OIDC auth for a dummy nginx service, and I'm hitting a super weird bug that I have no idea where it's coming from.
So, I have a
Keycloak service
Nginx service
The keycloak service runs on keycloak.example.com
The nginx service runs on example.com
There is a Classic ELB on AWS to serve that.
There are Route53 DNS records for
ALIAS example.com dualstack.awdoijawdij.amazonaws.com
ALIAS keycloak.example.com dualstack.awdoijawdij.amazonaws.com
When I was setting up the keycloak service, and there was only that service, I had no problem. But when I added the dummy nginx service, I started getting this.
I would use firefox to go to keycloak.example.com, and get a 404. If I do a hard refresh, then the page loads.
Then I would go to example.com, and would get a 404. If I do a hard refresh, then the page loads.
If I do a hard refresh on one page, then when I go to the other page, I will have to do a hard reload or I get a 404. It's like some DNS entry is toggling between these two things whenever I do the hard refresh. I have no idea on how to debug this.
If I
wget -O- example.com I have a 301 redirect to https://example.com as expected
wget -O- https://example.com I have a 200 OK as expected
wget -O- keycloak.example.com I have a 301 redirect to https://keycloak.example.com as expected
wget -O- https://keycloak.example.com I have a 200 OK as expected
Then everything is fine. Seems like the problem only occurs in the browser.
I tried opening the pages in Incognito mode, but the problem persists.
Can someone help me in debugging this?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - port: 80
    name: http
    protocol: TCP
  selector:
    app: nginx
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: nginx-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    tls:
      httpsRedirect: true
    hosts:
    - "example.com"
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: ingress-cert
    hosts:
    - "example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx
spec:
  hosts:
  - "example.com"
  gateways:
  - nginx-gateway
  http:
  - route:
    - destination:
        port:
          number: 80
        host: nginx
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: keycloak-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    tls:
      httpsRedirect: true
    hosts:
    - "keycloak.example.com"
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: ingress-cert
    hosts:
    - "keycloak.example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: keycloak
spec:
  hosts:
  - "keycloak.example.com"
  gateways:
  - keycloak-gateway
  http:
  - route:
    - destination:
        port:
          number: 80
        host: keycloak-http
The problem was that I was using the same certificate for both Gateways, which resulted in the browser reusing the same TCP connection for both services.
There is a discussion about it here: https://github.com/istio/istio/issues/9429
By using a different certificate for each Gateway, the problem disappears.
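In practice that means giving each Gateway's HTTPS server its own credential (a sketch; the secret names are placeholders for two separate TLS secrets):
# in nginx-gateway:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: ingress-cert-nginx      # placeholder: secret with the example.com cert
    hosts:
    - "example.com"
# in keycloak-gateway:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: ingress-cert-keycloak   # placeholder: secret with the keycloak.example.com cert
    hosts:
    - "keycloak.example.com"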
I have a bare-metal kubernetes cluster (1.13) and am running nginx ingress controller (deployed via helm into the default namespace, v0.22.0).
I have an ingress in a different namespace that attempts to use the nginx controller.
#ingress.yaml
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: myapp
  annotations:
    kubernetes.io/backend-protocol: https
    nginx.ingress.kubernetes.io/enable-rewrite-log: "true"
    nginx.ingress.kubernetes.io/rewrite-target: "/$1"
spec:
  tls:
  - hosts:
    - my-host
    secretName: tls-cert
  rules:
  - host: my-host
    http:
      paths:
      - backend:
          serviceName: my-service
          servicePort: https
        path: "/api/(.*)"
The nginx controller successfully finds the Ingress and says that there are endpoints. If I hit the endpoint, I get a 400 with no content. If I turn on custom-http-headers then I get a 404 from nginx; my service is not being hit. According to the rewrite logging, the URL is being rewritten correctly.
I have also hit the service directly from inside the pod, and that works as well.
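One way to reproduce that in-cluster check (a sketch; the curl image, the request path, and the use of -k for a self-signed cert are assumptions):
# run a throwaway pod in the same namespace as the service
kubectl run tmp-curl --rm -it --restart=Never --image=curlimages/curl -- sh
# inside the pod, hit the service directly over HTTPS
curl -vk https://my-service:5000/some-path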
#service.yaml
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  ports:
  - name: https
    protocol: TCP
    port: 5000
    targetPort: https
  selector:
    app: my-app
  clusterIP: <redacted>
  type: ClusterIP
  sessionAffinity: None
What could be going wrong?
EDIT: Disabling HTTPS everywhere still gives the same 400 error. However, if my app is expecting HTTPS requests and nginx is sending HTTP requests, then the requests do reach the app (but it can't process them).
Nginx will silently fail with a 400 if the request headers are invalid (for example, special characters in them). You can debug that using tcpdump.
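A minimal capture sketch, assuming tcpdump is available on the node running the controller and the traffic of interest arrives in plain HTTP on port 80 (adjust the interface and port for your setup):
tcpdump -A -s 0 -i any 'tcp port 80'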