I'm trying to add a Spring Boot Admin interface to some microservices that are deployed in a Kubernetes cluster. The Spring Boot Admin app has the following configuration:
spring:
  application:
    name: administrator-interface
  boot:
    admin:
      context-path: "/ui"
server:
  use-forward-headers: true
The Kubernetes cluster has an Ingress that works as an API gateway:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ template "fullname" . }}
  labels:
    app: {{ template "name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  annotations:
{{- range $key, $value := .Values.ingress.annotations }}
    {{ $key }}: {{ $value | quote }}
{{- end }}
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
{{- range $host := .Values.ingress.hosts }}
  - host: {{ $host }}
    http:
      paths:
      - path: /admin/(.+)
        backend:
          serviceName: administrator-interface-back
          servicePort: 8080
{{- end -}}
{{- if .Values.ingress.tls }}
  tls:
{{ toYaml .Values.ingress.tls | indent 4 }}
{{- end -}}
{{- end -}}
When I try to open the Spring Boot Admin UI I get the following error:
URL in the browser: https://XXXXXX/admin/ui (the XXXXXX host is given by DNS)
GET https://XXXXXX/ui/assets/css/chunk-common.6aba055e.css net::ERR_ABORTED 404
The URL is wrong: it should be https://XXXXXX/admin/ui/assets/css/chunk-common.6aba055e.css.
It is not adding the /admin path that is set by the ingress.
How can I solve this and configure an additional path so that the static content is requested from the right URL?
Thanks in advance.
The problem is that your Spring Boot Admin interface has no way of knowing that you are using the "/admin" prefix.
nginx.ingress.kubernetes.io/rewrite-target: /$1 tells nginx to rewrite the URL to the first capture group of your path regex.
So when you hit https://XXXXX/admin/ui, nginx rewrites the URL to https://XXXXXX/ui and then sends it to Spring Boot.
I don't know Spring Boot well, but you should have a way to give it a base path, so that instead of serving at /ui it serves at /$BASE_URL/ui.
Depending on how that works, you might then need to change how nginx rewrites the URL, to something like:
path: ^(/admin)/(.+)
nginx.ingress.kubernetes.io/rewrite-target: $1/$2
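For reference, one common way to give a Spring Boot application such a base path (my assumption; it is not named in the answer above, and the accepted solution below takes a different route) is the servlet context path, which puts the whole application, including Spring Boot Admin, under /admin:

# application.yml sketch, assuming it is paired with the prefix-preserving
# rewrite shown just above so that the backend really receives /admin/... paths
server:
  servlet:
    context-path: /admin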
Finally I found the solution.
Spring Boot Admin has a setting for exactly this, called the "public URL":
spring:
  application:
    name: administrator-interface
  boot:
    admin:
      context-path: "/ui"
      ui:
        public-url: "https://XXX/admin/ui"
server:
  use-forward-headers: true
With this configuration I am telling Spring Boot Admin that I want to serve it under the context path /ui, but that when it loads resources it should request them from /admin/ui.
Now I can connect to the interface through https://XXX/ui and it loads its resources from https://XXX/admin/ui, adding the prefix set by the ingress.
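As a side note, if the application is on Spring Boot 2.2 or newer (an assumption on my part; the version is not stated here), the use-forward-headers flag shown above is deprecated in favour of the forward-headers strategy:

server:
  forward-headers-strategy: native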
Thank you @Noé
This is my ingress.yaml file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: spring-ingress
  annotations:
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS, DELETE"
    nginx.ingress.kubernetes.io/cors-allow-origin: "*"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
    nginx.ingress.kubernetes.io/cors-allow-headers: "Content-Type"
    nginx.ingress.kubernetes.io/rewrite-target: "/"
spec:
  rules:
  - host: my-project.app.loc
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: user-service
            port:
              number: 8080
This is my UserController.java file:
@RestController
public class UserController {

    @GetMapping("/hello1")
    public String sayHello1() {
        return "Hello World 1!";
    }

    @GetMapping("/hello2")
    public String sayHello2() {
        return "Hello World 2!";
    }
}
This setup works. For example, if I call:
curl my-project.app.loc/hello1
I get Hello World 1!
However, my goal is to call:
curl my-project.app.loc/users/hello1
and get Hello World 1!
I found two working solutions:
Add the @RequestMapping("/users") annotation to my UserController class.
Or
Add server.servlet.context-path=/users to the application.properties file.
All this, with the "/" path in the Ingress file (a YAML sketch of the second option follows below).
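For completeness, a minimal sketch of that second option in YAML form (this assumes the application uses application.yml; the application.properties line above is equivalent):

server:
  servlet:
    context-path: /users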
But I have seen many examples on the Internet in which people use path: /users in the ingress.yaml file, without using the @RequestMapping annotation or editing application.properties.
I would like to know why this setup doesn't work for me and what I could do to make it work. As you can see, I have also used the rewrite-target annotation.
Thank you all in advance.
Solution:
I solved it using this configuration in the ingress.yaml file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: spring-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: my-project.app.loc
    http:
      paths:
      - path: /users/(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: user-service
            port:
              number: 8080
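To spell out why this works (an illustration using the endpoints from the controller above): the regex in path captures everything after /users/, and rewrite-target: /$1 forwards only the captured part to the backend, so the Spring application still sees its original paths and no @RequestMapping or context-path change is needed.

# request from the client    captured group $1    path seen by Spring Boot
# /users/hello1              hello1               /hello1  -> @GetMapping("/hello1")
# /users/hello2              hello2               /hello2  -> @GetMapping("/hello2")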
Converting an apiVersion: networking.k8s.io/v1beta1 Ingress manifest to apiVersion: networking.k8s.io/v1 manually, I am getting the error below.
Here is my ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
spec:
  rules:
  {{- range .Values.ingress.hosts }}
  - host: {{ . }}
    http:
      paths:
      - path: /abc/xx/xxxx
        pathType: "ImplementationSpecific"
        backend:
          service:
            name: {{ $xxxxxxxxx }}
            port:
              name: http
      - path: /abc/xx/xxxxxxxxx
        backend:
          service:
            name: {{ $xxxxxxx }}
            port:
              name: http
        pathType: "ImplementationSpecific"
But I am getting the following error:
helm upgrade
upgrade.go:79: [debug] preparing upgrade for xxxxxxxxxxxx
Error: UPGRADE FAILED: error validating "": error validating data: ValidationError(Ingress.spec.rules[0].http.paths): invalid type for io.k8s.api.networking.v1.HTTPIngressRuleValue.paths: got "map", expected "array"
helm.go:76: [debug] error validating "": error validating data: ValidationError(Ingress.spec.rules[0].http.paths): invalid type for io.k8s.api.networking.v1.HTTPIngressRuleValue.paths: got "map", expected "array"
helm.sh/helm/v3/pkg/kube.scrubValidationError
I know that my YAML is providing a map where an array is expected. I have tried multiple indentation options and formattings for my ingress.yaml, but I still get the same error.
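For comparison, this is roughly the shape the rendered manifest has to take for networking.k8s.io/v1 to accept it: paths must be a YAML sequence, with each - path: item carrying its own pathType and backend at the same level (the host and service names below are placeholders, not the values from the chart above):

spec:
  rules:
  - host: example.host
    http:
      paths:
      - path: /abc/xx/xxxx
        pathType: ImplementationSpecific
        backend:
          service:
            name: service-one        # placeholder
            port:
              name: http
      - path: /abc/xx/xxxxxxxxx
        pathType: ImplementationSpecific
        backend:
          service:
            name: service-two        # placeholder
            port:
              name: http

Rendering the chart locally (for example with helm template) shows whether the range block and its indentation actually produce this shape rather than a mapping.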
Good afternoon everyone, I have a question about adding monitoring of the application itself to Prometheus.
I am using Spring Boot Actuator and can see the values for Prometheus at https://example.com/actuator/prometheus.
I have installed Prometheus via the default Helm chart (helm -n monitor upgrade -f values.yaml pg prometheus-community/kube-prometheus-stack), adding the following to its default values:
additionalScrapeConfigs:
  job_name: prometheus
  scrape_interval: 40s
  scrape_timeout: 40s
  metrics_path: /actuator/prometheus
  scheme: https
Prometheus itself can be found at http://ex.com/prometheus
The deployment.yaml file of my Spring Boot application is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
      annotations:
        prometheus.io/path: /actuator/prometheus
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
    spec:
      containers:
      - env:
        - name: DATABASE_PASSWORD
          value: {{ .Values.DATABASE_PASSWORD }}
        - name: DATASOURCE_USERNAME
          value: {{ .Values.DATASOURCE_USERNAME }}
        - name: DATASOURCE_URL
          value: jdbc:postgresql://database-postgresql:5432/dev-school
        name: {{ .Release.Name }}
        image: {{ .Values.container.image }}
        ports:
        - containerPort: 8080
However, after that Prometheus still cannot see my metrics.
Can you tell me what the error could be?
In prometheus-operator, additionalScrapeConfigs is not used in this way.
According to the documentation on Additional Scrape Configuration:
AdditionalScrapeConfigs allows specifying a key of a Secret containing additional Prometheus scrape configurations.
The easiest way to add a new scrape config is to use a ServiceMonitor, like the example below:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
  labels:
    team: frontend
spec:
  selector:
    matchLabels:
      app: backend
  endpoints:
  - port: web
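For this ServiceMonitor to pick anything up, there must also be a Service whose labels match spec.selector and which exposes a port named web, and the endpoint will need path: /actuator/prometheus since the Spring Boot Actuator metrics are not served at the default /metrics. A minimal sketch of such a Service (the name is an assumption, not taken from the question):

apiVersion: v1
kind: Service
metadata:
  name: backend                 # assumed name
  labels:
    app: backend                # must match the ServiceMonitor selector
spec:
  selector:
    app: backend                # matches the pod labels from the Deployment above
  ports:
  - name: web                   # must match the port name referenced by the ServiceMonitor
    port: 8080
    targetPort: 8080

Depending on the chart values, the ServiceMonitor itself may also need the release label of the kube-prometheus-stack release so that the operator selects it.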
Is there a way in Helm to generate a file from a YAML template?
I need to create a configuration that is dynamic depending on the setup.
I need to add it as a secret/configuration file to the container when starting it.
Update:
This is the file contents that I would like to parameterize:
version: 1.4.9
port: 7054
debug: {{ $debug }}
...
tls:
  enabled: {{ $tls_enable }}
  certfile: {{ $tls_certfile }}
  keyfile: {{ $tls_keyfile }}
....
ca:
  name: {{ $ca_name }}
  keyfile: {{ $ca_keyfile }}
  certfile: {{ $ca_certfile }}
....
affiliations:
{{- range .Values.Organiza }}: []
All these values are dynamic and depend on the setup.
I don't have a clue how to pass this file content into a ConfigMap or any other Kubernetes object that would produce the final version of the file.
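One way to do this (a sketch, assuming the template above is stored in the chart as files/app-config.yaml and that its placeholders are rewritten to reference values, e.g. {{ .Values.debug }} instead of {{ $debug }}) is to run the file through Helm's tpl function and embed the result in a Secret:

# templates/app-config-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-app-config
stringData:
  # .Files.Get reads the raw file from the chart; tpl renders the
  # {{ .Values.* }} placeholders in it before it is embedded here
  app-config.yaml: |
{{ tpl (.Files.Get "files/app-config.yaml") . | indent 4 }}

The resulting Secret can then be mounted into the container as a file; a ConfigMap works the same way if the content is not sensitive.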
I have an Ingress (the default GKE one) which is handling all the SSL in front of my services. One of my services is a WebSocket service (Python autobahn). When I expose the service using a LoadBalancer and do not pass through the Ingress, everything works fine using ws://. When I instead expose it using a NodePort and pass through the Ingress, I constantly see connections dropping even when no client is connecting. Here are the autobahn logs:
WARNING:autobahn.asyncio.websocket.WebSocketServerProtocol:dropping connection to peer tcp:10.156.0.58:36868 with abort=False: None
When I connect using a client with wss:// the connection is successful, but a disconnection happens every few seconds (I could not get a consistent number).
Although I do not think it is related, I changed the timeout of the related backend service in GCE to 3600 seconds and also tried to give it session affinity using both client IP and cookie, but neither seems to stop the dropping connections.
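For what it is worth, on GKE the backend timeout and session affinity can also be set declaratively rather than in the console, through a BackendConfig referenced from the Service (a sketch; the resource name and the assumption that the BackendConfig CRD is available in the cluster are mine):

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: ws-backendconfig          # hypothetical name
spec:
  timeoutSec: 3600                # same 3600 s timeout as set manually above
  sessionAffinity:
    affinityType: "CLIENT_IP"

The Service behind the Ingress then needs the annotation cloud.google.com/backend-config: '{"default": "ws-backendconfig"}' for this configuration to be applied to its load-balancer backend.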
Here is my ingress definition:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ .Values.ingressName }}-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: {{ .Values.staticIpName }}-static-ip
  labels:
    oriient-app: "rest-api"
    oriient-system: "IPS"
spec:
  tls:
  - secretName: sslcerts
  rules:
  - host: {{ .Values.restApiHost }}
    http:
      paths:
      - backend:
          serviceName: rest-api-internal-service
          servicePort: 80
  - host: {{ .Values.dashboardHost }}
    http:
      paths:
      - backend:
          serviceName: dashboard-internal-service
          servicePort: 80
  - host: {{ .Values.monitorHost }}
    http:
      paths:
      - backend:
          serviceName: monitor-internal-service
          servicePort: 80
  - host: {{ .Values.ipsHost }}
    http:
      paths:
      - backend:
          serviceName: server-internal-ws-service
          servicePort: 80
The ws service is the "server-internal-ws-service".
Any suggestions?
I did not solve the issue, but I worked around it by exposing my wss with a LoadBalancer service and implementing the secure layer of the WebSocket myself.
I saved the certificate (the private key and full-chain public key, in PEM format) as a secret, mounted it as a volume, and then in the Python code used an SSLContext and passed it to the asyncio loop's create_server.
To create the certificate secret, create a yaml:
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
  name: sslcerts
data:
  # this is the base64 of your PEM full chain and private key
  tls.crt: XXX
  tls.key: YYY
and then
kubectl apply -f [path to the yaml above]
In your server deployment mount the secret:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
    ...
  name: server
spec:
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      ...
  template:
    metadata:
      labels:
        ...
    spec:
      volumes:
      - name: wss-ssl-certificate
        secret:
          secretName: sslcerts
      containers:
      - image: ...
        imagePullPolicy: Always
        name: server
        volumeMounts:
        - name: wss-ssl-certificate
          mountPath: /etc/wss
And in the Python code:
import asyncio
import ssl
from autobahn.asyncio.websocket import WebSocketServerFactory

# TLS context built from the certificate files mounted from the secret
sslcontext = ssl.SSLContext()
sslcontext.load_cert_chain('/etc/wss/tls.crt', '/etc/wss/tls.key')
wssIpsClientsFactory = WebSocketServerFactory()
...
loop = asyncio.get_event_loop()
coro = loop.create_server(wssIpsClientsFactory, '0.0.0.0', 9000, ssl=sslcontext)
server = loop.run_until_complete(coro)
Hope it helps someone