Kubernetes liveness probe gets 401 Unauthorized - Spring

I am trying to define a livenessProbe that passes the value of an HTTP header from a secret, but I am getting a 401 Unauthorized.
- name: mycontainer
  image: myimage
  env:
    - name: MY_SECRET
      valueFrom:
        secretKeyRef:
          name: actuatortoken
          key: token
  livenessProbe:
    httpGet:
      path: /test/actuator/health
      port: 9001
      httpHeaders:
        - name: Authorization
          value: $MY_SECRET
My secret is as follows:
apiVersion: v1
kind: Secret
metadata:
  name: actuatortoken
type: Opaque
stringData:
  token: Bearer <token>
If I pass the actual value directly, as below, it works as expected:
- name: mycontainer
  image: myimage
  livenessProbe:
    httpGet:
      path: /test/actuator/health
      port: 9001
      httpHeaders:
        - name: Authorization
          value: Bearer <token>
Any help is highly appreciated.

What you have puts the literal string $MY_SECRET in the Authorization header, which won't work; Kubernetes does not expand environment variable references in httpGet probe headers.
You don't want to put the actual value of the secret in your Pod/Deployment/whatever YAML since you don't want plaintext credentials in there.
3 options I can think of:
a) change your app to not require authentication for the /test/actuator/health endpoint;
b) change your app to not require authentication when the requested host is 127.0.0.1 and update the probe configuration to use that as the host;
c) switch from an HTTP probe to a command probe and write the curl/wget command yourself (a sketch follows below)
This answer is posted as a Community wiki, as it is based on Amit Kumar Gupta's comments.
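If you go with option (c), a minimal sketch could look like the following, assuming the container image ships curl and reusing the MY_SECRET env var, port and path from the question (exec probes run inside the container, so the shell can expand the variable; adjust the timings to your app):
livenessProbe:
  exec:
    command:
      - sh
      - -c
      # -f makes curl exit non-zero on HTTP errors (e.g. 401/503), which is
      # what the kubelet needs in order to treat the probe as failed
      - 'curl -fsS -H "Authorization: $MY_SECRET" http://localhost:9001/test/actuator/health'
  initialDelaySeconds: 30
  periodSeconds: 10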

Related

Istio Gateway Fail To Connect Via HTTPS

Deployments in a GKE cluster with Istio are working correctly via HTTP. But when I tried to secure them with cert-manager using the following resources, HTTPS requests fail on curl like so:
`Immediate connect fail for 64:ff9b::2247:fd8a: Network is unreachable
* connect to 34.71.253.138 port 443 failed: Connection refused`.
What should I do to make it work with HTTPS as well?
ClusterIssuer with the following configuration:
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
  namespace: istio-system
spec:
  acme:
    # The ACME server URL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: iprocureservers@iprocu.re
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
    # ACME DNS-01 provider configurations
    - dns01:
        # Google Cloud DNS
        clouddns:
          # Secret from the google service account key
          serviceAccountSecretRef:
            name: cert-manager-credentials
            key: gcp-dns-admin.json
          # The project in which to update the DNS zone
          project: iprocure-server
Certificate configuration like so, which produced a certificate in a Ready: True state:
apiVersion: cert-manager.io/v1alpha3
kind: Certificate
metadata:
  name: letsencrypt-staging
  namespace: istio-system
spec:
  secretName: letsencrypt-staging
  commonName: "*.iprocure.tk"
  dnsNames:
    - '*.iprocure.tk'
  issuerRef:
    name: letsencrypt-staging
    kind: ClusterIssuer
And lastly a Gateway
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: iprocure-gateway
  namespace: default
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
      tls:
        httpsRedirect: false
    - port:
        number: 443
        name: https
        protocol: HTTPS
      hosts:
        - "*"
      tls:
        mode: SIMPLE
        credentialName: letsencrypt-staging
If I do kubectl describe certificate -n istio-system, I get:
Name:         letsencrypt-staging
Namespace:    istio-system
Labels:       <none>
Annotations:  <none>
API Version:  cert-manager.io/v1
Kind:         Certificate
Metadata:
  Creation Timestamp:  2020-10-13T13:32:37Z
  Generation:          1
  Resource Version:    28030994
  Self Link:           /apis/cert-manager.io/v1/namespaces/istio-system/certificates/letsencrypt-staging
  UID:                 ad838d28-5349-4aaa-a618-cc3bfc316e6e
Spec:
  Common Name:  *.iprocure.tk
  Dns Names:
    *.iprocure.tk
  Issuer Ref:
    Kind:       ClusterIssuer
    Name:       letsencrypt-staging-clusterissuer
  Secret Name:  letsencrypt-staging-cert-secret
Status:
  Conditions:
    Last Transition Time:  2020-10-13T13:35:05Z
    Message:               Certificate is up to date and has not expired
    Reason:                Ready
    Status:                True
    Type:                  Ready
  Not After:               2021-01-11T12:35:05Z
  Not Before:              2020-10-13T12:35:05Z
  Renewal Time:            2020-12-12T12:35:05Z
  Revision:                1
Events:  <none>
Running kubectl get certificates -o wide -n istio-system, yields
NAME                  READY   SECRET                            ISSUER                              STATUS                                          AGE
letsencrypt-staging   True    letsencrypt-staging-cert-secret   letsencrypt-staging-clusterissuer   Certificate is up to date and has not expired   17h
Issue
I assume that HTTPS wasn't working because of the additional requirements that have to be met if you want to use cert-manager with Istio in older versions.
Solution
As @Yunus Einsteinium mentioned in the comments:
Thank you for guiding me in the right direction. Using the OSS Istio, not the GKE one, is the way to go! I managed to make HTTPS work!
So the solution here was to use OSS Istio installed with istioctl instead of the older Istio GKE add-on.
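For reference, a minimal sketch of installing upstream (OSS) Istio with istioctl (the profile and the injected namespace below are assumptions; pick whatever matches your environment):
istioctl install --set profile=default -y
kubectl label namespace default istio-injection=enabled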

K8s + Istio + Firefox hard refresh: accessing one service causes a 404 on another service until that other service is accessed

Learning k8s + Istio here. I've set up a 2-node + 1-master cluster with kops. I have Istio as the ingress controller. I'm trying to set up OIDC auth for a dummy nginx service, and I'm hitting a super weird bug I have no idea where it's coming from.
So, I have a
Keycloak service
Nginx service
The keycloak service runs on keycloak.example.com
The nginx service runs on example.com
There is a Classic ELB on AWS to serve that.
There are Route53 DNS records for
ALIAS example.com dualstack.awdoijawdij.amazonaws.com
ALIAS keycloak.example.com dualstack.awdoijawdij.amazonaws.com
When I was setting up the keycloak service, and there was only that service, I had no problem. But when I added the dummy nginx service, I started getting this.
I would use Firefox to go to keycloak.example.com and get a 404. If I do a hard refresh, the page loads.
Then I would go to example.com, and would get a 404. If I do a hard refresh, then the page loads.
If I do a hard refresh on one page, then when I go to the other page I have to do a hard reload or I get a 404. It's like some DNS entry is toggling between these two things whenever I do the hard refresh. I have no idea how to debug this.
If I
wget -O- example.com I have a 301 redirect to https://example.com as expected
wget -O- https://example.com I have a 200 OK as expected
wget -O- keycloak.example.com I have a 301 redirect to https://keycloak.example.com as expected
wget -O- https://keycloak.example.com I have a 200 OK as expected
Then everything is fine. Seems like the problem only occurs in the browser.
I tried opening the pages in Incognito mode, but the problem persists.
Can someone help me in debugging this?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
    - port: 80
      name: http
      protocol: TCP
  selector:
    app: nginx
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: nginx-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      tls:
        httpsRedirect: true
      hosts:
        - "example.com"
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: ingress-cert
      hosts:
        - "example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx
spec:
  hosts:
    - "example.com"
  gateways:
    - nginx-gateway
  http:
    - route:
        - destination:
            port:
              number: 80
            host: nginx
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: keycloak-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      tls:
        httpsRedirect: true
      hosts:
        - "keycloak.example.com"
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: ingress-cert
      hosts:
        - "keycloak.example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: keycloak
spec:
  hosts:
    - "keycloak.example.com"
  gateways:
    - keycloak-gateway
  http:
    - route:
        - destination:
            port:
              number: 80
            host: keycloak-http
The problem was that I was using the same certificate for both Gateways, which resulted in the browser reusing the same TCP connection for both services.
There is a discussion about it here https://github.com/istio/istio/issues/9429
By using a different certificate for each Gateway, the problem disappears.
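A hedged sketch of what that looks like, with each Gateway's HTTPS server pointing at its own TLS secret (the secret names below are assumptions; create them however you issue your certificates):
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: nginx-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: example-com-cert        # assumed secret for example.com
      hosts:
        - "example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: keycloak-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: keycloak-example-com-cert  # assumed secret for keycloak.example.com
      hosts:
        - "keycloak.example.com"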

Spring Boot, Minikube, Istio and Keycloak: "Invalid parameter: redirect_uri"

I have an application running in Minikube that works with the ingress-gateway as expected. A Spring Boot app is called, the view is displayed, and a protected resource is called via a link. The call is forwarded to Keycloak, is authorized via the login mask, and the protected resource is displayed as expected.
With Istio the redirecting fails with the message: "Invalid parameter: redirect_uri".
My Istio Gateway config
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  namespace: istio-system
  name: istio-bomc-app-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
My virtualservice config
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: istio-bomc-app-hrm-virtualservice
  namespace: bomc-app
spec:
  hosts:
    - "*"
  gateways:
    - istio-bomc-app-gateway.istio-system.svc.cluster.local
  http:
    - match:
        - uri:
            prefix: /bomc-hrm
      route:
        - destination:
            host: bomc-hrm-service.bomc-app.svc.cluster.local
            port:
              number: 80
After clicking the protected link, I get the following URI in the browser:
http://192.168.99.100:31380/auth/realms/bomc-hrm-realm/protocol/openid-connect/auth?response_type=code&client_id=bomc-hrm-app&redirect_uri=http%3A%2F%2F192.168.99.100%2Fbomc-hrm%2Fui%2Fcustomer%2Fcustomers&state=4739ab56-a8f3-4f78-bd29-c05e7ea7cdbe&login=true&scope=openid
I see the redirect_uri=http%3A%2F%2F192.168.99.100%2F is not complete. The port 31380 is missing.
How does Istio VirtualService need to be configured?
Have you checked the output of the following command on Google Cloud? It may give you some clues:
kubectl describe
Check the Kubernetes resources involved.
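For example, using the resource names from the question (these are read-only commands and safe to run):
kubectl describe gateway istio-bomc-app-gateway -n istio-system
kubectl describe virtualservice istio-bomc-app-hrm-virtualservice -n bomc-app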

Openshift secret in Spring Boot bootstrap.yml

This is how my bootstrap.yml looks:
spring:
  cloud:
    config:
      uri: http://xxxx.com
      username: ****
      password: ****
    vault:
      host: vault-server
      port: 8200
      scheme: http
      authentication: token
      token: ${VAULT_ROOT_TOKEN}
  application:
    name: service-name
management:
  security:
    enabled: false
The application starts when I configure the secret as an ENV variable in the Deployment Config (OSE), as below:
name: VAULT_ROOT_TOKEN
value: *********
But configuring the ENV variable to fetch its value from an OSE secret is not working:
name: VAULT_ROOT_TOKEN
valueFrom:
  secretKeyRef:
    name: vault-token
    key: roottoken
The error that I am getting is:
org.springframework.vault.VaultException: Status 400 secret/service-name/default: 400 Bad Request: missing required Host header
Surprisingly, the ENV variable works within the container/pod, but somehow it cannot be fetched during the bootstrap procedure.
env | grep TOKEN
VAULT_ROOT_TOKEN=********
My secret configuration in OSE
oc describe secret vault-token
Name: vault-token
Namespace: ****
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
roottoken: 37 bytes
What is missing in my deployment config or secret in OSE? How do I configure it to fetch the secret as an ENV variable and inject it into the bootstrap.yml file?
NOTE: I can't move the Vault configuration out of bootstrap.yml.
OpenShift Enterprise info:
Version:
  OpenShift Master:  v3.2.1.31
  Kubernetes Master: v1.2.0-36-g4a3f9c5
Finally I was able to achieve this. This is what I have done:
Provide the token as an argument:
java $JAVA_OPTS -jar -Dspring.cloud.vault.token=${SPRING_CLOUD_VAULT_TOKEN} service-name.jar
This is how my configuration looks:
Deployment Config:
- name: SPRING_CLOUD_VAULT_TOKEN
  valueFrom:
    secretKeyRef:
      name: vault-token
      key: roottoken
Bootstrap file:
spring:
  cloud:
    config:
      uri: http://xxxx.com
      username: ****
      password: ****
    vault:
      host: vault-server
      port: 8200
      scheme: http
      authentication: token
      token: ${SPRING_CLOUD_VAULT_TOKEN}
  application:
    name: service-name
management:
  security:
    enabled: false
Thanks to my colleagues who provided the inputs.

How do I access this Kubernetes service via kubectl proxy?

I want to access my Grafana Kubernetes service via the kubectl proxy server, but for some reason it won't work even though I can make it work for other services. Given the below service definition, why is it not available on http://localhost:8001/api/v1/proxy/namespaces/monitoring/services/grafana?
grafana-service.yaml
apiVersion: v1
kind: Service
metadata:
  namespace: monitoring
  name: grafana
  labels:
    app: grafana
spec:
  type: NodePort
  ports:
    - name: web
      port: 3000
      protocol: TCP
      nodePort: 30902
  selector:
    app: grafana
grafana-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: monitoring
  name: grafana
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
        - name: grafana
          image: grafana/grafana:4.1.1
          env:
            - name: GF_AUTH_BASIC_ENABLED
              value: "true"
            - name: GF_AUTH_ANONYMOUS_ENABLED
              value: "true"
            - name: GF_SECURITY_ADMIN_USER
              valueFrom:
                secretKeyRef:
                  name: grafana-credentials
                  key: user
            - name: GF_SECURITY_ADMIN_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: grafana-credentials
                  key: password
          volumeMounts:
            - name: grafana-storage
              mountPath: /var/grafana-storage
          ports:
            - name: web
              containerPort: 3000
          resources:
            requests:
              memory: 100Mi
              cpu: 100m
            limits:
              memory: 200Mi
              cpu: 200m
        - name: grafana-watcher
          image: quay.io/coreos/grafana-watcher:v0.0.5
          args:
            - '--watch-dir=/var/grafana-dashboards'
            - '--grafana-url=http://localhost:3000'
          env:
            - name: GRAFANA_USER
              valueFrom:
                secretKeyRef:
                  name: grafana-credentials
                  key: user
            - name: GRAFANA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: grafana-credentials
                  key: password
          resources:
            requests:
              memory: "16Mi"
              cpu: "50m"
            limits:
              memory: "32Mi"
              cpu: "100m"
          volumeMounts:
            - name: grafana-dashboards
              mountPath: /var/grafana-dashboards
      volumes:
        - name: grafana-storage
          emptyDir: {}
        - name: grafana-dashboards
          configMap:
            name: grafana-dashboards
The error I'm seeing when accessing the above URL is "no endpoints available for service "grafana"", error code 503.
With Kubernetes 1.10 the proxy URL should be slightly different, like this:
http://localhost:8080/api/v1/namespaces/default/services/SERVICE-NAME:PORT-NAME/proxy/
Ref: https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#manually-constructing-apiserver-proxy-urls
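For the Grafana service in the question, that pattern gives something like the following (note the named port, web, from the Service definition):
http://localhost:8001/api/v1/namespaces/monitoring/services/grafana:web/proxy/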
As Michael says, quite possibly your labels or namespaces are mismatching. However in addition to that, keep in mind that even when you fix the endpoint, the url you're after (http://localhost:8001/api/v1/proxy/namespaces/monitoring/services/grafana) might not work correctly.
Depending on your root_url and/or static_root_path grafana configuration settings, when trying to login you might get grafana trying to POST to http://localhost:8001/login and get a 404.
Try using kubectl port-forward instead:
kubectl -n monitoring port-forward [grafana-pod-name] 3000
then access grafana via http://localhost:3000/
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
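With newer kubectl versions you can also port-forward to the Service directly instead of looking up a pod name, e.g.:
kubectl -n monitoring port-forward svc/grafana 3000:3000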
The issue is that Grafana's port is named web, and as a result one needs to append :web to the kubectl proxy URL: http://localhost:8001/api/v1/proxy/namespaces/monitoring/services/grafana:web.
An alternative is to not name the Grafana port, because then you don't have to append :web to the kubectl proxy URL for the service: http://localhost:8001/api/v1/proxy/namespaces/monitoring/services/grafana. I went with this option in the end since it's easier.
There are a few factors that might be causing this issue:
1. The service expects to find one or more supporting endpoints, which it discovers through matching rules on the labels. If the labels don't align, the service won't find any endpoints, and the network gateway function performed by the service will return a 503.
2. The port declared by the pod, and used by the process within the container, doesn't match the targetPort expected by the service.
Either one of these might generate the error. Let's take a closer look.
First, kubectl describe the service:
$ kubectl describe svc grafana01-grafana-3000
Name:                   grafana01-grafana-3000
Namespace:              default
Labels:                 app=grafana01-grafana
                        chart=grafana-0.3.7
                        component=grafana
                        heritage=Tiller
                        release=grafana01
Annotations:            <none>
Selector:               app=grafana01-grafana,component=grafana,release=grafana01
Type:                   NodePort
IP:                     10.0.0.197
Port:                   <unset>  3000/TCP
NodePort:               <unset>  30905/TCP
Endpoints:              10.1.45.69:3000
Session Affinity:       None
Events:                 <none>
Notice that my grafana service has 1 endpoint listed (there could be multiple). The error above in your example indicates that you won't have endpoints listed here.
Endpoints: 10.1.45.69:3000
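A quick way to check this on your side (service name and namespace taken from your question):
kubectl get endpoints grafana -n monitoring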
Let's take a look next at the selectors. In the example above, you can see I have 3 selector labels on my service:
Selector: app=grafana01-grafana,component=grafana,release=grafana01
I'll kubectl describe my pods next:
$ kubectl describe pod grafana
Name:           grafana01-grafana-1843344063-vp30d
Namespace:      default
Node:           10.10.25.220/10.10.25.220
Start Time:     Fri, 14 Jul 2017 03:25:11 +0000
Labels:         app=grafana01-grafana
                component=grafana
                pod-template-hash=1843344063
                release=grafana01
...
Notice that the labels on the pod align correctly, hence my service finds pods which provide endpoints which are load balanced against by the service. Verify that this part of the chain isn't broken in your environment.
If you do find that the labels are correct, you may still have a disconnect in that the grafana process running within the container within the pod is running on a different port than you expect.
$ kubectl describe pod grafana
Name:           grafana01-grafana-1843344063-vp30d
...
Containers:
  grafana:
    Container ID:   docker://69f11b7828c01c5c3b395c008d88e8640c5606f4d865107bf4b433628cc36c76
    Image:          grafana/grafana:latest
    Image ID:       docker-pullable://grafana/grafana@sha256:11690015c430f2b08955e28c0e8ce7ce1c5883edfc521b68f3fb288e85578d26
    Port:           3000/TCP
    State:          Running
      Started:      Fri, 14 Jul 2017 03:25:26 +0000
If for some reason, your port under the container listed a different value, then the service is effectively load balancing against an invalid endpoint.
For example, if it listed port 80:
Port: 80/TCP
Or was an empty value
Port:
Then even if your label selectors were correct, the service would never find a valid response from the pod and would remove the endpoint from the rotation.
I suspect your issue is the first problem above (mismatched label selectors).
If both the label selectors and ports align, then you might have a problem with the MTU setting between nodes. In some cases, if the MTU used by your networking layer (like calico) is larger than the MTU of the supporting network, then you'll never get a valid response from the endpoint. Typically, this last potential issue will manifest itself as a timeout rather than a 503 though.
Your Deployment may not have a label app: grafana, or be in another namespace. Could you also post the Deployment definition?
