How do I get the hostname in a Kubernetes pod - Go

I have 3 Ingresses pointing to the same Service. In my Kubernetes pod, how can I find the hostname, i.e. which subdomain the request is coming from? My backend code is a Go server.
When a request comes to any pod, I want to know from which subdomain (x, y, z) it arrived. Currently the Go code gives the hostname as the pod's IP address.
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/browser-xss-filter: 'true'
    ingress.kubernetes.io/force-hsts: 'true'
    ingress.kubernetes.io/hsts-include-subdomains: 'true'
    ingress.kubernetes.io/hsts-max-age: '315360000'
  name: test
  namespace: test
spec:
  rules:
  - host: x.test.com  # host must be a DNS name, not a URL
    http:
      paths:
      - backend:
          serviceName: test-service
          servicePort: 8080
        path: /
  - host: y.test.com
    http:
      paths:
      - backend:
          serviceName: test-service
          servicePort: 8080
        path: /
  - host: z.test.com
    http:
      paths:
      - backend:
          serviceName: test-service
          servicePort: 8080
        path: /
// Subdomain extracts the subdomain of the host the client requested.
// Uses the net, net/http and strings packages.
func Subdomain(r *http.Request) string {
	// r.URL.Host is empty for server-side requests; use r.Host,
	// which carries the Host header.
	host := strings.TrimSpace(r.Host)
	// Strip the port, if present (e.g. "x.test.com:8080").
	if h, _, err := net.SplitHostPort(host); err == nil {
		host = h
	}
	// A subdomain exists if the host has more than two dot-separated parts.
	if hostParts := strings.Split(host, "."); len(hostParts) > 2 {
		return hostParts[0]
	}
	return ""
}

Try using the X-Forwarded-Host header added by the ingress controller.
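A minimal sketch of that approach, assuming a plain net/http server (the handler path and port are illustrative): prefer X-Forwarded-Host when the ingress controller sets it, and fall back to the Host header.

package main

import (
	"fmt"
	"net/http"
)

// hostFrom returns the hostname the client used, preferring the
// X-Forwarded-Host header added by the ingress controller.
func hostFrom(r *http.Request) string {
	if h := r.Header.Get("X-Forwarded-Host"); h != "" {
		return h // e.g. "x.test.com"
	}
	return r.Host
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "request arrived via %s\n", hostFrom(r))
	})
	http.ListenAndServe(":8080", nil)
}

Feeding hostFrom(r) into the Subdomain helper above, instead of r.URL.Host, should then yield x, y or z rather than the pod's IP.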

Related

How to fix the swagger-ui path

There was a problem with the Swagger document path.
build.gradle
implementation("org.springdoc:springdoc-openapi-ui:1.6.8")
application.properties
server.servlet.context-path: /poo
REQ : http://localhost:port/poo/swagger-ui/index.html
-> Failed to load remote configuration.
REQ : http://localhost:port/poo//swagger-ui/index.html
-> OK
This can be caused by the Ingress definition; try the following. Change this:
spec:
  rules:
  - http:
      paths:
      - path: /my-context/my-path/swagger
        pathType: ImplementationSpecific
        backend:
          service:
            name: my-service #.ns-config.svc.cluster.local
            port:
              number: 8080
to this:
spec:
  rules:
  - http:
      paths:
      - backend:
          service:
            name: my-service #.ns-config.svc.cluster.local
            port:
              number: 8080
        path: /my-context/my-path/swagger
        pathType: ImplementationSpecific
You might also check this comment describing the best approach to running springdoc behind a custom context-path or proxy.

Is it possible to have a single ingress resource for all MuleSoft applications in RTF on self-managed Kubernetes on AWS?

Can we have a single ingress resource for the deployment of all MuleSoft applications in RTF on self-managed Kubernetes on AWS?
Ingress template:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rtf-ingress
  namespace: rtf
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      rewrite ^/app-name(/|$)(.*) /$2 break;
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/enable-underscores-in-headers: "true"
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: rtf-nginx
  rules:
  - host: example.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: temp1-svc
            port:
              number: 80
      - pathType: Prefix
        path: /
        backend:
          service:
            name: temp2-svc
            port:
              number: 80
temp1-svc:
apiVersion: v1
kind: Service
metadata:
  name: temp1-svc
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: temp1-svc
temp2-svc:
apiVersion: v1
kind: Service
metadata:
  name: temp2-svc
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: temp2-svc
I am new to RTF. Are any changes needed in the Ingress resource, or do we need a separate ingress resource for each application? Any help would be appreciated.
Thanks
Generally, managing a separate ingress per application is a good option. You can also use a single ingress to route and forward traffic across the cluster.
Single ingress for all services:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rtf-ingress
  namespace: rtf
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      rewrite ^/app-name(/|$)(.*) /$2 break;
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/enable-underscores-in-headers: "true"
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: rtf-nginx
  rules:
  - host: example.com
    http:
      paths:
      # each app needs its own path prefix; the second capture group
      # feeds the rewrite-target /$2 above
      - pathType: ImplementationSpecific
        path: /service(/|$)(.*)
        backend:
          service:
            name: service
            port:
              number: 80
      - pathType: ImplementationSpecific
        path: /service-2(/|$)(.*)
        backend:
          service:
            name: service-2
            port:
              number: 80
The benefit of multiple or separate ingress resources is that you can configure different annotations on each ingress: in one you may want to enable CORS, while in another you may want to change the proxy body size, and so on. So it's often better to manage one ingress per microservice.

POST https://apm.<acme>.com/intake/v2/rum/events net::ERR_BLOCKED_BY_CLIENT

I have an Elastic stack on Kubernetes (k8s) using ECK.
Kibana version:
7.13.2
Elasticsearch version:
7.13.2
APM Server version:
7.13.2
APM Agent language and version:
https://www.npmjs.com/package/@elastic/apm-rum - 5.9.1
Browser version:
Chrome latest
Description of the problem:
The frontend apm-rum agent fails to send messages to the APM server. If I disable CORS in the browser it works: google-chrome --disable-web-security --user-data-dir=temp, then navigate to my frontend at http://localhost:4201/
[Elastic APM] Failed sending events! Error: https://apm.<redacted>.com/intake/v2/rum/events HTTP status: 0
at ApmServer._constructError (apm-server.js:120)
at eval (apm-server.js:48)
POST https://apm.<acme>.com/intake/v2/rum/events net::ERR_BLOCKED_BY_CLIENT
Code:
apm.yml
apiVersion: apm.k8s.elastic.co/v1
kind: ApmServer
metadata:
  name: apm-server-prod
  namespace: elastic-system
spec:
  version: 7.13.2
  count: 1
  elasticsearchRef:
    name: "elasticsearch-prod"
  kibanaRef:
    name: "kibana-prod"
  http:
    service:
      spec:
        type: NodePort
  config:
    apm-server:
      rum.enabled: true
      ilm.enabled: true
elastic.ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: elastic-ingress
  namespace: elastic-system
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: "<redacted>"
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80,"HTTPS": 443}]'
    alb.ingress.kubernetes.io/backend-protocol: 'HTTPS'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-2:<rd>:certificate/0250a551-8971-468d-a483-cad28f890463
    alb.ingress.kubernetes.io/tags: Environment=prod,Team=dev
    alb.ingress.kubernetes.io/healthcheck-path: /health
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '300'
    alb.ingress.kubernetes.io/load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=<redacted>-aws-ingress-logs,access_logs.s3.prefix=dev-ingress
spec:
  rules:
  - host: elasticsearch.<redacted>.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: elasticsearch-prod-es-http
            port:
              number: 9200
  - host: kibana.<redacted>.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kibana-prod-kb-http
            port:
              number: 5601
  - host: apm.<redacted>.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: apm-server-prod-apm-http
            port:
              number: 8200
frontend.js
import { init as initApm } from "@elastic/apm-rum";
import config from "<redacted>-web/config/environment";
export const apm = initApm({
  serviceName: "frontend",
  serverUrl: "https://apm.<redacted>.com",
  environment: config.environment,
  logLevel: "debug",
});
Errors in the browser console: see above.
APM server: the apm-server pod does not appear to log any errors in this case; I assume the client never reaches the server.
I was running into the same problem. Check your ad blocker: I found that uBlock was blocking requests to */rum/events.
I'm guessing they consider this a type of user "tracker" and that's why the requests are blocked. There's really no way around it, unless you change the endpoint path.
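For the path-changing workaround, here is a minimal sketch in Go (the "/t" path and apm.example.com URL are hypothetical): a small first-party reverse proxy accepts an innocuous path and forwards it to the real intake endpoint, so "*/rum/events" never appears in the browser's requests. The RUM agent would need to be pointed at the new path (apm-rum exposes a serverUrlPrefix option for this).

package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Hypothetical APM server URL; replace with your own.
	apm, err := url.Parse("https://apm.example.com")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(apm)

	// "/t" is an arbitrary path unlikely to match ad-block filter lists.
	http.HandleFunc("/t", func(w http.ResponseWriter, r *http.Request) {
		// Rewrite back to the real intake path before forwarding.
		r.URL.Path = "/intake/v2/rum/events"
		r.Host = apm.Host
		proxy.ServeHTTP(w, r)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}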

The Kubernetes ingress doesn't allow Websocket communication over HTTPS

I am trying to host my WebAPI service in Kubernetes over HTTPS. The service has inbound communication over port 80 and outbound websocket communication over port 81. I have configured the AKS ingress controller with a static IP assigned to it.
The application works just fine over the HTTP protocol, but since I enabled a TLS certificate for HTTPS, the websocket communication no longer works.
Ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mywebapiservice
  namespace: mywebapi
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-staging
    ingress.kubernetes.io/rewrite-target: /
    ingress.kubernetes.io/ssl-redirect: 'true'
    kubernetes.io/ingress.class: mywebapi_customclass
    kubernetes.io/tls-acme: 'true'
    nginx.ingress.kubernetes.io/proxy-body-size: 50m
    nginx.ingress.kubernetes.io/proxy-connect-timeout: '7200'
    nginx.ingress.kubernetes.io/proxy-read-timeout: '7200'
    nginx.ingress.kubernetes.io/proxy-send-timeout: '7200'
    nginx.ingress.kubernetes.io/ssl-passthrough: 'true'
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  tls:
  - hosts:
    - mywebapi.eastus.cloudapp.azure.com
    secretName: my-tls-secret
  rules:
  - host: mywebapi.eastus.cloudapp.azure.com
    http:
      paths:
      - path: /app/api/ServiceA/subscription
        backend:
          serviceName: ServiceA
          servicePort: 81
      - path: /app/api/ServiceA/(.+)
        backend:
          serviceName: ServiceA
          servicePort: 80
      - path: /app/api/ServiceB/subscription
        backend:
          serviceName: ServiceB
          servicePort: 81
      - path: /app/api/ServiceB/(.+)
        backend:
          serviceName: ServiceB
          servicePort: 80
status:
  loadBalancer:
    ingress:
    - ip: xx.xx.xxx.xxx
The C# code await listener.GetContextAsync() is never executed.
listener = new HttpListener();
// HttpListener prefixes must use the http:// or https:// scheme; the
// WebSocket upgrade happens after the HTTP(S) handshake.
listener.Prefixes.Add("https://*:81/app/api/ServiceB/subscription/");
var listenerContext = await listener.GetContextAsync();
The error that I get on the client side: Error during WebSocket handshake: Unexpected response code: 503
Please suggest if you find something wrong here.
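One way to narrow down where the 503 comes from, sketched below (the local port is whatever you forward with kubectl port-forward svc/ServiceB 8081:81): dial the websocket endpoint directly, bypassing the ingress. If the handshake succeeds against the service but fails through the ingress, the ingress layer is at fault; note that nginx's ssl-passthrough operates at the TCP level and ignores path-based rules, so combining it with rewrite-target and regex paths as above may not behave as expected.

package main

import (
	"log"

	"golang.org/x/net/websocket"
)

func main() {
	// Hypothetical local endpoint, reached via kubectl port-forward.
	url := "ws://localhost:8081/app/api/ServiceB/subscription/"
	origin := "http://localhost/"

	ws, err := websocket.Dial(url, "", origin)
	if err != nil {
		log.Fatalf("handshake failed: %v", err) // e.g. bad status 503
	}
	defer ws.Close()
	log.Println("handshake succeeded")
}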

GKE Ingress dropping websocket connections when using https

I have an Ingress (the default GKE one) which handles all the SSL in front of my services. One of my services is a WebSocket service (Python autobahn). When I expose the service using a LoadBalancer, not passing through the ingress, everything works fine over ws://. When I instead expose it using a NodePort and pass through the ingress, I constantly see connections dropping even when no client is connected. Here are the autobahn logs:
WARNING:autobahn.asyncio.websocket.WebSocketServerProtocol:dropping connection to peer tcp:10.156.0.58:36868 with abort=False: None
When I connect using a client over wss:// the connection succeeds, but a disconnection happens every few seconds (I could not get a consistent number).
Although I do not think it is related, I changed the timeout of the related backend service in GCE to 3600 seconds, and also tried session affinity with both clientIP and cookie, but neither stopped the dropping connections.
Here is my ingress definition:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ .Values.ingressName }}-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: {{ .Values.staticIpName }}-static-ip
  labels:
    oriient-app: "rest-api"
    oriient-system: "IPS"
spec:
  tls:
  - secretName: sslcerts
  rules:
  - host: {{ .Values.restApiHost }}
    http:
      paths:
      - backend:
          serviceName: rest-api-internal-service
          servicePort: 80
  - host: {{ .Values.dashboardHost }}
    http:
      paths:
      - backend:
          serviceName: dashboard-internal-service
          servicePort: 80
  - host: {{ .Values.monitorHost }}
    http:
      paths:
      - backend:
          serviceName: monitor-internal-service
          servicePort: 80
  - host: {{ .Values.ipsHost }}
    http:
      paths:
      - backend:
          serviceName: server-internal-ws-service
          servicePort: 80
The ws service is the "server-internal-ws-service".
Any suggestions?
I did not solve the issue, but I worked around it by exposing my wss service with a LoadBalancer service and implementing the secure layer of the WebSocket myself.
I saved the certificate (the private key and full-chain public key, PEM format) as a secret, mounted it as a volume, and then in the Python code created an SSLContext and passed it to the asyncio loop's create_server.
For creating the certificate secret, create a YAML file:
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
  name: sslcerts
data:
  # base64 of your PEM full chain and private key
  tls.crt: XXX
  tls.key: YYY
and then
kubectl apply -f [path to the yaml above]
In your server deployment mount the secret:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
    ...
  name: server
spec:
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      ...
  template:
    metadata:
      labels:
        ...
    spec:
      volumes:
      - name: wss-ssl-certificate
        secret:
          secretName: sslcerts
      containers:
      - image: ...
        imagePullPolicy: Always
        name: server
        volumeMounts:
        - name: wss-ssl-certificate
          mountPath: /etc/wss
And in the Python code:
import asyncio
import ssl

from autobahn.asyncio.websocket import WebSocketServerFactory

# Build an SSL context from the mounted certificate and key.
sslcontext = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
sslcontext.load_cert_chain("/etc/wss/tls.crt", "/etc/wss/tls.key")

wssIpsClientsFactory = WebSocketServerFactory()
...
loop = asyncio.get_event_loop()
coro = loop.create_server(wssIpsClientsFactory, "0.0.0.0", 9000, ssl=sslcontext)
server = loop.run_until_complete(coro)
Hope it helps someone
