I have a Spring Boot project deployed in Kubernetes, and I would like to get the URL of the pod.
I referred to How to Get Current Pod in Kubernetes Java Application, but it did not work for me.
There are different environments (DEV, QA, etc.) and I want to get the URL dynamically. Is there any way?
My service YAML:
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: test-service
  namespace: default
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/metric: concurrency
        # Disable scale to zero with a minScale of 1.
        autoscaling.knative.dev/minScale: "1"
        # Limit scaling to 100 pods.
        autoscaling.knative.dev/maxScale: "100"
    spec:
      containers:
        - image: testImage
          ports:
            - containerPort: 8080
Is it possible to add

valueFrom:
  fieldRef:
    fieldPath: status.podIP

from https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#the-downward-api?
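For reference, a downward API entry like that would sit under the container's env in the revision template, roughly like this (a sketch only; the env var name MY_POD_IP is made up here, and fieldRef env vars may require a Knative feature flag or may not be supported at all in this API version):

spec:
  template:
    spec:
      containers:
        - image: testImage
          ports:
            - containerPort: 8080
          env:
            - name: MY_POD_IP   # hypothetical name, not from the original post
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP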
Your case is about Knative ingress, not plain Kubernetes.
From the inbound network connectivity section of the runtime contract in the Knative documentation:
In addition, the following base set of HTTP/1.1 headers MUST be set on
the request:
Host - As specified by RFC 7230 Section 5.4
Also, the following proxy-specific request headers MUST be set:
Forwarded - As specified by RFC 7239.
Look at the headers of your incoming request.
An example for a servlet doGet method:
// Dump all request headers (including Host and Forwarded) into an HTML table
Enumeration<String> headerNames = request.getHeaderNames();
while (headerNames.hasMoreElements()) {
    String headerName = headerNames.nextElement();
    String headerValue = request.getHeader(headerName);
    out.print("<tr><td>" + headerName + "</td>\n");
    out.println("<td>" + headerValue + "</td></tr>\n");
}
Good afternoon everyone, I have a question about adding monitoring of the application itself to Prometheus.
I am using Spring Boot Actuator and can see the Prometheus metrics at https://example.com/actuator/prometheus.
I have installed Prometheus via the default Helm chart (helm -n monitor upgrade -f values.yaml pg prometheus-community/kube-prometheus-stack) by adding the following values for it:
additionalScrapeConfigs:
  - job_name: prometheus
    scrape_interval: 40s
    scrape_timeout: 40s
    metrics_path: /actuator/prometheus
    scheme: https
Prometheus itself can be found at http://ex.com/prometheus
The deployment.yaml file of my Spring Boot application is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
      annotations:
        prometheus.io/path: /actuator/prometheus
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
    spec:
      containers:
        - env:
            - name: DATABASE_PASSWORD
              value: {{ .Values.DATABASE_PASSWORD }}
            - name: DATASOURCE_USERNAME
              value: {{ .Values.DATASOURCE_USERNAME }}
            - name: DATASOURCE_URL
              value: jdbc:postgresql://database-postgresql:5432/dev-school
          name: {{ .Release.Name }}
          image: {{ .Values.container.image }}
          ports:
            - containerPort: 8080
However, after that Prometheus still cannot see my values.
Can you tell me what the error could be?
In prometheus-operator, additionalScrapeConfigs is not used this way.
According to the Additional Scrape Configuration documentation:
AdditionalScrapeConfigs allows specifying a key of a Secret containing additional Prometheus scrape configurations.
The easiest way to add a new scrape config is to use a ServiceMonitor, like the example below:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
  labels:
    team: frontend
spec:
  selector:
    matchLabels:
      app: backend
  endpoints:
    - port: web
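Note that a ServiceMonitor selects Services by their labels, not pods, and port refers to a named port on that Service; you can also set path: /actuator/prometheus on the endpoint, since the default scrape path is /metrics. A Service in front of the Deployment above might look roughly like this (a sketch only; the Service name and the port name web are assumptions, not from the original post):

apiVersion: v1
kind: Service
metadata:
  name: backend          # assumed name
  labels:
    app: backend         # must match the ServiceMonitor's selector.matchLabels
spec:
  selector:
    app: backend         # matches the pod labels from the Deployment above
  ports:
    - name: web          # the ServiceMonitor endpoint's port refers to this name
      port: 8080
      targetPort: 8080

Depending on the chart values, the ServiceMonitor itself may also need a label (for example the Helm release label) that matches the Prometheus resource's serviceMonitorSelector, or it will not be picked up.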
I'm running this tutorial https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html and found that the Elasticsearch operator comes with a pre-defined secret which is accessed through kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}'. I was wondering how I can access it in a manifest file for a pod that will use it as an env var. The pod's manifest is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user
  template:
    metadata:
      labels:
        app: user
    spec:
      containers:
        - name: user
          image: reactor/user
          env:
            - name: PORT
              value: "3000"
            - name: ES_SECRET
              valueFrom:
                secretKeyRef:
                  name: quickstart-es-elastic-user
                  key: { { .data.elastic } }
---
apiVersion: v1
kind: Service
metadata:
  name: user-svc
spec:
  selector:
    app: user
  ports:
    - name: user
      protocol: TCP
      port: 3000
      targetPort: 3000
When trying to define ES_SECRET as I did in this manifest, I get this error message: invalid map key: map[interface {}]interface {}{\".data.elastic\":interface {}(nil)}\n. Any help on resolving this would be much appreciated.
The secret returned via the API (kubectl get secret ...) is a JSON structure like this:
{
  "data": {
    "elastic": "base64 encoded string"
  }
}
So you just need to replace
key: { { .data.elastic } }
with
key: elastic
since it is a secretKeyRef (i.e. you reference a value by its key within the data section, the contents, of the secret whose name you specified above). No need to worry about base64 decoding; Kubernetes does it for you.
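Put together, the corrected env entry looks like this (only the key line changes from the manifest in the question):

env:
  - name: ES_SECRET
    valueFrom:
      secretKeyRef:
        name: quickstart-es-elastic-user
        key: elastic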
I would like to set up Ambassador as an API Gateway for Kubernetes using Terraform. There are several ways to configure Ambassador. The recommended way, according to the documentation, is by using Kubernetes annotations for each service that is routed and exposed outside the cluster. This is done easily using a Kubernetes YAML configuration:
kind: Service
apiVersion: v1
metadata:
  name: my-service
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Mapping
      name: my_service_mapping
      prefix: /my-service/
      service: my-service
spec:
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
The getambassador.io/config field's value starting with | suggests it is a multiline string. How can I achieve the same thing using Terraform HCL?
The Terraform documentation has a section on multiline strings using heredoc syntax (<<EOF ... EOF):
resource "kubernetes_service" "my-service" {
  metadata {
    name = "my-service"

    annotations = {
      "getambassador.io/config" = <<EOF
apiVersion: ambassador/v0
kind: Mapping
name: my_service_mapping
prefix: /my-service/
service: my-service
EOF
    }
  }

  spec {
    selector = {
      app = "MyApp"
    }

    port {
      protocol    = "TCP"
      port        = 80
      target_port = "9376"
    }
  }
}
Make sure not to carry over the triple dash (---) from the YAML configuration; Terraform parses it incorrectly.
I have an Ingress (the default GKE one) which handles all the SSL in front of my services. One of my services is a WebSocket service (python autobahn). When I expose the service using a LoadBalancer and do not pass through the ingress, everything works fine using ws://. When I instead expose it using a NodePort and pass through the ingress, I constantly see connections being dropped, even when no client is connecting. Here are the autobahn logs:
WARNING:autobahn.asyncio.websocket.WebSocketServerProtocol:dropping connection to peer tcp:10.156.0.58:36868 with abort=False: None
When I connect using a client with wss:// the connection is successful, but a disconnection happens every few seconds (I could not get a consistent number).
Although I do not think it is related, I changed the timeout of the related backend service in GCE to 3600 sec and also tried to give it session affinity using both clientIP and cookie, but neither seems to stop the dropping connections.
Here is my ingress definition:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ .Values.ingressName }}-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: {{ .Values.staticIpName }}-static-ip
  labels:
    oriient-app: "rest-api"
    oriient-system: "IPS"
spec:
  tls:
    - secretName: sslcerts
  rules:
    - host: {{ .Values.restApiHost }}
      http:
        paths:
          - backend:
              serviceName: rest-api-internal-service
              servicePort: 80
    - host: {{ .Values.dashboardHost }}
      http:
        paths:
          - backend:
              serviceName: dashboard-internal-service
              servicePort: 80
    - host: {{ .Values.monitorHost }}
      http:
        paths:
          - backend:
              serviceName: monitor-internal-service
              servicePort: 80
    - host: {{ .Values.ipsHost }}
      http:
        paths:
          - backend:
              serviceName: server-internal-ws-service
              servicePort: 80
The ws service is the "server-internal-ws-service".
Any suggestions?
I did not solve the issue, but I worked around it by exposing my wss with a LoadBalancer service and implementing the secure layer of the WebSocket myself.
I saved the certificate (the private key and full-chain public key, PEM format) as a secret, mounted it as a volume, and then in the Python code built an SSLContext and passed it to the asyncio loop's create_server.
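A LoadBalancer Service in front of the WebSocket port could look roughly like this (a sketch only; the Service name, selector label, and external port are assumptions, with targetPort 9000 taken from the Python snippet below):

apiVersion: v1
kind: Service
metadata:
  name: server-wss          # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: server             # assumed pod label
  ports:
    - name: wss
      port: 443
      targetPort: 9000      # the port the Python server listens on (see below)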
For creating the certificate secret, create a YAML file:
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
  name: sslcerts
data:
  # this is the base64 of your PEM full chain and private key
  tls.crt: XXX
  tls.key: YYY
and then
kubectl apply -f [path to the yaml above]
In your server deployment, mount the secret:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
    ...
  name: server
spec:
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      ...
  template:
    metadata:
      labels:
        ...
    spec:
      volumes:
        - name: wss-ssl-certificate
          secret:
            secretName: sslcerts
      containers:
        - image: ...
          imagePullPolicy: Always
          name: server
          volumeMounts:
            - name: wss-ssl-certificate
              mountPath: /etc/wss
And in the Python code:

import asyncio
import ssl

from autobahn.asyncio.websocket import WebSocketServerFactory

# Build an SSL context from the certificate files mounted from the secret
sslcontext = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
sslcontext.load_cert_chain("/etc/wss/tls.crt", "/etc/wss/tls.key")

wssIpsClientsFactory = WebSocketServerFactory()
...
loop = asyncio.get_event_loop()
coro = loop.create_server(wssIpsClientsFactory, '0.0.0.0', 9000, ssl=sslcontext)
server = loop.run_until_complete(coro)
Hope it helps someone
I have deployed two services to a Kubernetes Cluster on GCP:
One is a Spring Cloud Api Gateway implementation:
apiVersion: v1
kind: Service
metadata:
  name: api-gateway
spec:
  ports:
    - name: main
      port: 80
      targetPort: 8080
      protocol: TCP
  selector:
    app: api-gateway
    tier: web
  type: NodePort
The other one is a backend chat service implementation which exposes a WebSocket at the /ws/ path.
apiVersion: v1
kind: Service
metadata:
  name: chat-api
spec:
  ports:
    - name: main
      port: 80
      targetPort: 8080
      protocol: TCP
  selector:
    app: chat
    tier: web
  type: NodePort
The API Gateway is exposed to the internet through a Contour Ingress Controller:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-gateway-ingress
  annotations:
    kubernetes.io/tls-acme: "true"
    certmanager.k8s.io/cluster-issuer: "letsencrypt-prod"
    ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
    - secretName: api-gateway-tls
      hosts:
        - api.mydomain.com.br
  rules:
    - host: api.mydomain.com.br
      http:
        paths:
          - backend:
              serviceName: api-gateway
              servicePort: 80
The gateway routes incoming calls on the /chat/ path to the chat service on /ws/:
@Bean
public RouteLocator routes(RouteLocatorBuilder builder) {
    return builder.routes()
            .route(r -> r.path("/chat/**")
                    .filters(f -> f.rewritePath("/chat/(?<segment>.*)", "/ws/(?<segment>.*)"))
                    .uri("ws://chat-api"))
            .build();
}
When I try to connect to the WebSocket through the gateway I get a 403 error:
error: Unexpected server response: 403
I even tried to connect using http, https, ws, and wss, but the error remains.
Does anyone have a clue?
I had the same issue using the Ingress resource with Contour 0.5.0, but I managed to solve it by upgrading Contour to v0.6.0-beta.3 and using IngressRoute (be aware, though, that it's a beta version).
You can add an IngressRoute resource (CRD) like this (and remove your previous Ingress resource):
# ingressroute.yaml
apiVersion: contour.heptio.com/v1beta1
kind: IngressRoute
metadata:
  name: api-gateway-ingress
  namespace: default
spec:
  virtualhost:
    fqdn: api.mydomain.com.br
    tls:
      secretName: api-gateway-tls
  routes:
    - match: /
      services:
        - name: api-gateway
          port: 80
    - match: /chat
      enableWebsockets: true # Setting this to true enables websockets for all paths that match /chat
      services:
        - name: api-gateway
          port: 80
Then apply it with kubectl.
WebSockets will be authorized only on the /chat path.
See the Contour IngressRoute documentation for more detail.