Kubernetes Traefik v2.3.0 - Web UI 404 Not Found after removing --api.insecure
I'm running Traefik v2.3.0 in an AKS (Azure Kubernetes Service) cluster and I'm currently trying to set up Basic Authentication on my Traefik UI.
The dashboard (Traefik UI) works fine without any authentication, but I get a 404 Not Found page when I try to access it with Basic Authentication enabled.
Here is my configuration.
IngressRoute, Middleware for BasicAuth, Secret and Service:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-ui
  namespace: ingress-basic
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`traefik-ui.domain.com`) && PathPrefix(`/`) || PathPrefix(`/dashboard`)
      services:
        - name: traefik-ui
          port: 80
      middlewares:
        - name: traefik-ui-auth
          namespace: ingress-basic
  tls:
    secretName: traefik-ui-cert
---
apiVersion: v1
kind: Secret
metadata:
  name: traefik-secret
  namespace: ingress-basic
data:
  users: |2
    dWlhZG06JGFwcjEkanJMZGtEb1okaS9BckJmZzFMVkNIMW80bGtKWFN6LwoK
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: traefik-ui-auth
  namespace: ingress-basic
spec:
  basicAuth:
    secret: traefik-secret
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-ui
  namespace: ingress-basic
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: traefik-ingress-lb
  sessionAffinity: None
  type: ClusterIP
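(For reference, the users value in the Secret above is just the base64 of an htpasswd entry. Assuming the apache2-utils htpasswd tool is available, it can be regenerated like this, with uiadm / S3cr3t! as placeholder credentials; kubectl also does the base64 encoding for you:)

# print user:hash and base64-encode it for the Secret's data.users field
htpasswd -nb uiadm 'S3cr3t!' | base64
# or create the secret directly, letting kubectl handle the encoding
kubectl create secret generic traefik-secret -n ingress-basic \
  --from-literal=users="$(htpasswd -nb uiadm 'S3cr3t!')"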
DaemonSet:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: traefik-ingress
  namespace: ingress-basic
spec:
  selector:
    matchLabels:
      app: traefik-ingress-lb
  template:
    metadata:
      labels:
        app: traefik-ingress-lb
      name: traefik-ingress-lb
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: size
                    operator: In
                    values:
                      - small
      containers:
        - args:
            - --api.dashboard=true
            - --accesslog
            - --accesslog.fields.defaultmode=keep
            - --accesslog.fields.headers.defaultmode=keep
            - --entrypoints.web.address=:80
            - --entrypoints.websecure.address=:443
            - --entrypoints.metrics.address=:8082
            - --providers.kubernetesIngress.ingressClass=traefik-cert-manager
            - --certificatesresolvers.default.acme.email=info@domain.com
            - --certificatesresolvers.default.acme.storage=acme.json
            - --certificatesresolvers.default.acme.tlschallenge
            - --providers.kubernetescrd
            - --ping=true
            - --pilot.token=xxxxxx-xxxx-xxxx-xxxxx-xxxxx-xx
            - --metrics.statsd=true
            - --metrics.statsd.address=localhost:8125
            - --metrics.statsd.addEntryPointsLabels=true
            - --metrics.statsd.addServicesLabels=true
          image: traefik:v2.3.0
          imagePullPolicy: IfNotPresent
          name: traefik-ingress-lb
          ports:
            - containerPort: 80
              name: web
              protocol: TCP
            - containerPort: 8080
              name: admin
              protocol: TCP
            - containerPort: 443
              name: websecure
              protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /acme/acme.json
              name: acme
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: traefik-ingress
      serviceAccountName: traefik-ingress
      terminationGracePeriodSeconds: 60
      tolerations:
        - effect: NoSchedule
          key: size
          operator: Equal
          value: small
      volumes:
        - hostPath:
            path: /srv/configs/acme.json
            type: ""
          name: acme
With this configuration:
kubectl exec -it -n ingress-basic traefik-ingress-2m88q -- curl http://localhost:8080/dashboard/
404 page not found
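(Note: with --api.insecure removed, Traefik no longer serves the dashboard on port 8080 at all, so a 404 there is expected regardless of the middleware. The route can instead be exercised through the websecure entrypoint with a forced Host header, where uiadm:<password> stands for whatever the secret encodes and -k skips cert verification:)
kubectl exec -it -n ingress-basic traefik-ingress-2m88q -- \
  curl -k -u 'uiadm:<password>' -H 'Host: traefik-ui.domain.com' https://localhost/dashboard/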
When removing the Middleware and adding --api.insecure to the DaemonSet config:
kubectl exec -it -n ingress-basic traefik-ingress-1hf4q -- curl http://localhost:8080/dashboard/
<!DOCTYPE html><html><head><title>Traefik</title><meta charset=utf-8><meta name=description content="Traefik UI"><meta name=format-detection content="telephone=no"><meta name=msapplication-tap-highlight content=no><meta name=viewport content="user-scalable=no,initial-scale=1,maximum-scale=1,minimum-scale=1,width=device-width"><link rel=icon type=image/png href=statics/app-logo-128x128.png><link rel=icon type=image/png sizes=16x16 href=statics/icons/favicon-16x16.png><link rel=icon[...]</body></html>
Please let me know what I am doing wrong here. Is there another way of doing this?
Regards,
Here's another take on the IngressRoute, adapted to your environment.
I think 99% of the issue is the actual route matching, especially since you say --api.insecure works.
Also, as a rule of thumb, enabling the log and the access log in the DaemonSet definition helps a lot:
- --log
- --log.level=DEBUG
- --accesslog
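(With DEBUG enabled you can then check whether the IngressRoute was actually picked up or rejected, e.g. like this, where the label matches your DaemonSet pods:)
kubectl logs -n ingress-basic -l app=traefik-ingress-lb --tail=200 | grep -iE 'error|traefik-ui'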
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-ui
  namespace: ingress-basic
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`traefik-ui.domain.com`) && (PathPrefix(`/api`) || PathPrefix(`/dashboard`))
      kind: Rule
      services:
        - name: api@internal
          kind: TraefikService
      middlewares:
        - name: traefik-basic-auth
  tls:
    secretName: traefik-ui-cert
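(Once the route matches, you should get a 401 without credentials and a 200 with them. This assumes traefik-ui.domain.com resolves to your Traefik load balancer, and uiadm:<password> is whatever your secret encodes:)
curl -sk -o /dev/null -w '%{http_code}\n' https://traefik-ui.domain.com/dashboard/
curl -sk -o /dev/null -w '%{http_code}\n' -u 'uiadm:<password>' https://traefik-ui.domain.com/dashboard/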
Related
(invalid_token_response) An error occurred while attempting to retrieve the OAuth 2.0 Access Token Response: 401 Unauthorized: [no body]
I'm creating Microservices that are deployed in a docker-desktop Kubernetes cluster for development. I'm using Spring Security with Auth0, and the pods are using Kubernetes Native Service Discovery coupled with Spring Cloud Gateway. When I log in using Auth0, it authenticates just fine, but the token that is received appears to be empty based on the error given. I'm new to Kubernetes, and this error only seems to occur when running the application on the Kubernetes cluster. If I use Eureka for local testing, Auth0 works completely fine. I've tried to do some research to see if the issue is the token being unable to be retrieved in the Kubernetes cluster, and the only solution I've been able to find is to implement istioctl within the cluster.
FRONTEND deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-interface-app
  labels:
    app: user-interface-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user-interface-app
  template:
    metadata:
      labels:
        app: user-interface-app
    spec:
      containers:
        - name: user-interface-app
          image: imageName:tag
          imagePullPolicy: Always
          ports:
            - containerPort: 8084
          env:
            - name: GATEWAY_URL
              value: api-gateway-svc.default.svc.cluster.local
            - name: ZIPKIN_SERVER_URL
              valueFrom:
                configMapKeyRef:
                  name: gateway-cm
                  key: zipkin_service_url
            - name: STRIPE_API_KEY
              valueFrom:
                secretKeyRef:
                  name: secret
                  key: stripe-api-key
            - name: STRIPE_PUBLIC_KEY
              valueFrom:
                secretKeyRef:
                  name: secret
                  key: stripe-public-key
            - name: STRIPE_WEBHOOK_SECRET
              valueFrom:
                secretKeyRef:
                  name: secret
                  key: stripe-webhook-secret
            - name: AUTH_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  name: secret
                  key: auth-client-id
            - name: AUTH_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: secret
                  key: auth-client-secret
---
apiVersion: v1
kind: Service
metadata:
  name: user-interface-svc
spec:
  selector:
    app: user-interface-app
  type: ClusterIP
  ports:
    - port: 8084
      targetPort: 8084
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: user-interface-lb
spec:
  selector:
    app: user-interface-app
  type: LoadBalancer
  ports:
    - name: frontend
      port: 8084
      targetPort: 8084
      protocol: TCP
    - name: request
      port: 80
      targetPort: 8084
      protocol: TCP
API-GATEWAY deployment.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: gateway-cm
data:
  cart_service_url: http://cart-service-svc.default.svc.cluster.local
  customer_profile_service_url: http://customer-profile-service-svc.default.svc.cluster.local
  order_service_url: http://order-service-svc.default.svc.cluster.local
  product_service_url: lb://product-service-svc.default.svc.cluster.local
  zipkin_service_url: http://zipkin-svc.default.svc.cluster.local:9411
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-gateway-app
  labels:
    app: api-gateway-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api-gateway-app
  template:
    metadata:
      labels:
        app: api-gateway-app
    spec:
      containers:
        - name: api-gateway-app
          image: imageName:imageTag
          imagePullPolicy: Always
          ports:
            - containerPort: 8090
          env:
            - name: PRODUCT_SERVICE_URL
              valueFrom:
                configMapKeyRef:
                  name: gateway-cm
                  key: product_service_url
---
apiVersion: v1
kind: Service
metadata:
  name: api-gateway-np
spec:
  selector:
    app: api-gateway-app
  type: NodePort
  ports:
    - port: 80
      targetPort: 8090
      protocol: TCP
      nodePort: 30499
---
apiVersion: v1
kind: Service
metadata:
  name: api-gateway-svc
spec:
  selector:
    app: api-gateway-app
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8090
      protocol: TCP
Minikube: Issue running Kibana
I am currently learning Kubernetes, using Minikube on macOS with Docker Desktop. I am facing issues with running Kibana, which seems to fail to start, and with enabling it through my nginx ingress controller. Regarding Kibana, it doesn't move to the ready stage; it seems to be stuck and restarts several times. Everything lives inside the default namespace, except for fluentd, for which I use a persistent volume and persistent volume claim to access the shared /data/logs folder. I have added my fluentd, kibana, es and ingress yaml configurations, as well as the kibana logs, below.
Fluentd
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - namespaces
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: fluentd
    namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
    version: v1
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-logging
      version: v1
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
          env:
            - name: FLUENT_ELASTICSEARCH_HOST
              value: "elasticsearch"
            - name: FLUENT_ELASTICSEARCH_PORT
              value: "9200"
            - name: FLUENT_ELASTICSEARCH_SCHEME
              value: "http"
            - name: FLUENT_UID
              value: "0"
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 200Mi
          volumeMounts:
            - mountPath: /var/logs
              name: logs
      terminationGracePeriodSeconds: 30
      volumes:
        - name: logs
          persistentVolumeClaim:
            claimName: chi-kube-pvc
Kibana.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
spec:
  selector:
    matchLabels:
      run: kibana
  template:
    metadata:
      labels:
        run: kibana
    spec:
      containers:
        - name: kibana
          image: docker.elastic.co/kibana/kibana:7.14.2
          readinessProbe:
            httpGet:
              path: /kibana
              port: 5601
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /kibana
              port: 5601
            initialDelaySeconds: 15
            periodSeconds: 20
          env:
            - name: XPACK_SECURITY_ENABLED
              value: "true"
            - name: SERVER_BASEPATH
              value: "/kibana"
          ports:
            - containerPort: 5601
          volumeMounts:
            - mountPath: /var/logs
              name: logs
      volumes:
        - name: logs
          persistentVolumeClaim:
            claimName: chi-pvc
Kibana logs:
{"type":"log","@timestamp":"2021-09-22T09:54:47+00:00","tags":["info","plugins-service"],"pid":1216,"message":"Plugin \"metricsEntities\" is disabled."}
{"type":"log","@timestamp":"2021-09-22T09:54:47+00:00","tags":["warning","config","deprecation"],"pid":1216,"message":"You should set server.basePath along with server.rewriteBasePath. Starting in 7.0, Kibana will expect that all requests start with server.basePath rather than expecting you to rewrite the requests in your reverse proxy. Set server.rewriteBasePath to false to preserve the current behavior and silence this warning."}
{"type":"log","@timestamp":"2021-09-22T09:54:47+00:00","tags":["warning","config","deprecation"],"pid":1216,"message":"Support for setting server.host to \"0\" in kibana.yml is deprecated and will be removed in Kibana version 8.0.0. Instead use \"0.0.0.0\" to bind to all interfaces."}
{"type":"log","@timestamp":"2021-09-22T09:54:47+00:00","tags":["warning","config","deprecation"],"pid":1216,"message":"plugins.scanDirs is deprecated and is no longer used"}
{"type":"log","@timestamp":"2021-09-22T09:54:47+00:00","tags":["warning","config","deprecation"],"pid":1216,"message":"Config key [monitoring.cluster_alerts.email_notifications.email_address] will be required for email notifications to work in 8.0.\""}
{"type":"log","@timestamp":"2021-09-22T09:54:47+00:00","tags":["warning","config","deprecation"],"pid":1216,"message":"\"xpack.reporting.roles\" is deprecated. Granting reporting privilege through a \"reporting_user\" role will not be supported starting in 8.0. Please set \"xpack.reporting.roles.enabled\" to \"false\" and grant reporting privileges to users using Kibana application privileges **Management > Security > Roles**."}
{"type":"log","@timestamp":"2021-09-22T09:54:48+00:00","tags":["info","http","server","NotReady"],"pid":1216,"message":"http server running at http://0.0.0.0:5601"}
{"type":"log","@timestamp":"2021-09-22T09:54:48+00:00","tags":["info","plugins-system"],"pid":1216,"message":"Setting up [106] plugins: [translations,taskManager,licensing,globalSearch,globalSearchProviders,banners,licenseApiGuard,code,usageCollection,xpackLegacy,telemetryCollectionManager,telemetryCollectionXpack,kibanaUsageCollection,securityOss,share,screenshotMode,telemetry,newsfeed,mapsEms,mapsLegacy,legacyExport,kibanaLegacy,embeddable,uiActionsEnhanced,expressions,charts,esUiShared,bfetch,data,savedObjects,visualizations,visTypeXy,visTypeVislib,visTypeTimelion,features,visTypeTagcloud,visTypeTable,visTypePie,visTypeMetric,visTypeMarkdown,tileMap,regionMap,presentationUtil,timelion,home,searchprofiler,painlessLab,grokdebugger,graph,visTypeVega,management,watcher,licenseManagement,indexPatternManagement,advancedSettings,discover,discoverEnhanced,dashboard,dashboardEnhanced,visualize,visTypeTimeseries,savedObjectsManagement,spaces,security,transform,savedObjectsTagging,lens,reporting,canvas,lists,ingestPipelines,fileUpload,maps,dataVisualizer,encryptedSavedObjects,dataEnhanced,timelines,dashboardMode,cloud,upgradeAssistant,snapshotRestore,fleet,indexManagement,rollup,remoteClusters,crossClusterReplication,indexLifecycleManagement,enterpriseSearch,eventLog,actions,alerting,triggersActionsUi,stackAlerts,ruleRegistry,osquery,ml,cases,securitySolution,observability,uptime,infra,monitoring,logstash,console,apmOss,apm]"}
{"type":"log","@timestamp":"2021-09-22T09:54:48+00:00","tags":["info","plugins","taskManager"],"pid":1216,"message":"TaskManager is identified by the Kibana UUID: 4f523c36-da1f-46e2-a071-84ee400bb9e7"}
{"type":"log","@timestamp":"2021-09-22T09:54:48+00:00","tags":["warning","plugins","security","config"],"pid":1216,"message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-09-22T09:54:48+00:00","tags":["warning","plugins","security","config"],"pid":1216,"message":"Session cookies will be transmitted over insecure connections. This is not recommended."}
{"type":"log","@timestamp":"2021-09-22T09:54:48+00:00","tags":["warning","plugins","reporting","config"],"pid":1216,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-09-22T09:54:48+00:00","tags":["warning","plugins","reporting","config"],"pid":1216,"message":"Chromium sandbox provides an additional layer of protection, but is not supported for Linux CentOS 8.4.2105\n OS. Automatically setting 'xpack.reporting.capture.browser.chromium.disableSandbox: true'."}
{"type":"log","@timestamp":"2021-09-22T09:54:48+00:00","tags":["warning","plugins","encryptedSavedObjects"],"pid":1216,"message":"Saved objects encryption key is not set. This will severely limit Kibana functionality. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-09-22T09:54:48+00:00","tags":["warning","plugins","actions","actions"],"pid":1216,"message":"APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-09-22T09:54:48+00:00","tags":["warning","plugins","alerting","plugins","alerting"],"pid":1216,"message":"APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-09-22T09:54:48+00:00","tags":["info","plugins","ruleRegistry"],"pid":1216,"message":"Write is disabled, not installing assets"}
{"type":"log","@timestamp":"2021-09-22T09:54:49+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations..."}
{"type":"log","@timestamp":"2021-09-22T09:54:49+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"Starting saved objects migrations"}
{"type":"log","@timestamp":"2021-09-22T09:54:49+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana] INIT -> OUTDATED_DOCUMENTS_SEARCH_OPEN_PIT. took: 226ms."}
{"type":"log","@timestamp":"2021-09-22T09:54:49+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana] OUTDATED_DOCUMENTS_SEARCH_OPEN_PIT -> OUTDATED_DOCUMENTS_SEARCH_READ. took: 192ms."}
{"type":"log","@timestamp":"2021-09-22T09:54:49+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana] OUTDATED_DOCUMENTS_SEARCH_READ -> OUTDATED_DOCUMENTS_SEARCH_CLOSE_PIT. took: 118ms."}
{"type":"log","@timestamp":"2021-09-22T09:54:49+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana_task_manager] INIT -> OUTDATED_DOCUMENTS_SEARCH_OPEN_PIT. took: 536ms."}
{"type":"log","@timestamp":"2021-09-22T09:54:49+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana] OUTDATED_DOCUMENTS_SEARCH_CLOSE_PIT -> UPDATE_TARGET_MAPPINGS. took: 86ms."}
{"type":"log","@timestamp":"2021-09-22T09:54:49+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana_task_manager] OUTDATED_DOCUMENTS_SEARCH_OPEN_PIT -> OUTDATED_DOCUMENTS_SEARCH_READ. took: 86ms."}
{"type":"log","@timestamp":"2021-09-22T09:54:50+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana_task_manager] OUTDATED_DOCUMENTS_SEARCH_READ -> OUTDATED_DOCUMENTS_SEARCH_CLOSE_PIT. took: 64ms."}
{"type":"log","@timestamp":"2021-09-22T09:54:50+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana] UPDATE_TARGET_MAPPINGS -> UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK. took: 112ms."}
{"type":"log","@timestamp":"2021-09-22T09:54:50+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana_task_manager] OUTDATED_DOCUMENTS_SEARCH_CLOSE_PIT -> UPDATE_TARGET_MAPPINGS. took: 49ms."}
{"type":"log","@timestamp":"2021-09-22T09:54:50+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana_task_manager] UPDATE_TARGET_MAPPINGS -> UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK. took: 29ms."}
{"type":"log","@timestamp":"2021-09-22T09:54:50+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana] UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK -> DONE. took: 106ms."}
{"type":"log","@timestamp":"2021-09-22T09:54:50+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana] Migration completed after 840ms"}
{"type":"log","@timestamp":"2021-09-22T09:54:50+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana_task_manager] UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK -> DONE. took: 104ms."}
{"type":"log","@timestamp":"2021-09-22T09:54:50+00:00","tags":["info","savedobjects-service"],"pid":1216,"message":"[.kibana_task_manager] Migration completed after 869ms"}
{"type":"log","@timestamp":"2021-09-22T09:54:50+00:00","tags":["info","plugins-system"],"pid":1216,"message":"Starting [106] plugins: [translations,taskManager,licensing,globalSearch,globalSearchProviders,banners,licenseApiGuard,code,usageCollection,xpackLegacy,telemetryCollectionManager,telemetryCollectionXpack,kibanaUsageCollection,securityOss,share,screenshotMode,telemetry,newsfeed,mapsEms,mapsLegacy,legacyExport,kibanaLegacy,embeddable,uiActionsEnhanced,expressions,charts,esUiShared,bfetch,data,savedObjects,visualizations,visTypeXy,visTypeVislib,visTypeTimelion,features,visTypeTagcloud,visTypeTable,visTypePie,visTypeMetric,visTypeMarkdown,tileMap,regionMap,presentationUtil,timelion,home,searchprofiler,painlessLab,grokdebugger,graph,visTypeVega,management,watcher,licenseManagement,indexPatternManagement,advancedSettings,discover,discoverEnhanced,dashboard,dashboardEnhanced,visualize,visTypeTimeseries,savedObjectsManagement,spaces,security,transform,savedObjectsTagging,lens,reporting,canvas,lists,ingestPipelines,fileUpload,maps,dataVisualizer,encryptedSavedObjects,dataEnhanced,timelines,dashboardMode,cloud,upgradeAssistant,snapshotRestore,fleet,indexManagement,rollup,remoteClusters,crossClusterReplication,indexLifecycleManagement,enterpriseSearch,eventLog,actions,alerting,triggersActionsUi,stackAlerts,ruleRegistry,osquery,ml,cases,securitySolution,observability,uptime,infra,monitoring,logstash,console,apmOss,apm]"}
{"type":"log","@timestamp":"2021-09-22T09:54:50+00:00","tags":["info","plugins","monitoring","monitoring"],"pid":1216,"message":"config sourced from: production cluster"}
{"type":"log","@timestamp":"2021-09-22T09:54:51+00:00","tags":["info","http","server","Kibana"],"pid":1216,"message":"http server running at http://0.0.0.0:5601"}
{"type":"log","@timestamp":"2021-09-22T09:54:52+00:00","tags":["info","plugins","monitoring","monitoring","kibana-monitoring"],"pid":1216,"message":"Starting monitoring stats collection"}
{"type":"log","@timestamp":"2021-09-22T09:54:52+00:00","tags":["info","plugins","securitySolution"],"pid":1216,"message":"Dependent plugin setup complete - Starting ManifestTask"}
{"type":"log","@timestamp":"2021-09-22T09:54:52+00:00","tags":["info","status"],"pid":1216,"message":"Kibana is now degraded"}
{"type":"log","@timestamp":"2021-09-22T09:54:52+00:00","tags":["info","plugins","reporting"],"pid":1216,"message":"Browser executable: /usr/share/kibana/x-pack/plugins/reporting/chromium/headless_shell-linux_x64/headless_shell"}
{"type":"log","@timestamp":"2021-09-22T09:54:52+00:00","tags":["warning","plugins","reporting"],"pid":1216,"message":"Enabling the Chromium sandbox provides an additional layer of protection."}
{"type":"log","@timestamp":"2021-09-22T09:54:55+00:00","tags":["info","status"],"pid":1216,"message":"Kibana is now available (was degraded)"}
{"type":"response","@timestamp":"2021-09-22T09:54:58+00:00","tags":[],"pid":1216,"method":"get","statusCode":404,"req":{"url":"/kibana","method":"get","headers":{"host":"172.17.0.3:5601","user-agent":"kube-probe/1.22","accept":"*/*","connection":"close"},"remoteAddress":"172.17.0.1","userAgent":"kube-probe/1.22"},"res":{"statusCode":404,"responseTime":36,"contentLength":60},"message":"GET /kibana 404 36ms - 60.0B"}
{"type":"response","@timestamp":"2021-09-22T09:55:08+00:00","tags":[],"pid":1216,"method":"get","statusCode":404,"req":{"url":"/kibana","method":"get","headers":{"host":"172.17.0.3:5601","user-agent":"kube-probe/1.22","accept":"*/*","connection":"close"},"remoteAddress":"172.17.0.1","userAgent":"kube-probe/1.22"},"res":{"statusCode":404,"responseTime":25,"contentLength":60},"message":"GET /kibana 404 25ms - 60.0B"}
{"type":"response","@timestamp":"2021-09-22T09:55:08+00:00","tags":[],"pid":1216,"method":"get","statusCode":404,"req":{"url":"/kibana","method":"get","headers":{"host":"172.17.0.3:5601","user-agent":"kube-probe/1.22","accept":"*/*","connection":"close"},"remoteAddress":"172.17.0.1","userAgent":"kube-probe/1.22"},"res":{"statusCode":404,"responseTime":22,"contentLength":60},"message":"GET /kibana 404 22ms - 60.0B"}
{"type":"response","@timestamp":"2021-09-22T09:55:18+00:00","tags":[],"pid":1216,"method":"get","statusCode":404,"req":{"url":"/kibana","method":"get","headers":{"host":"172.17.0.3:5601","user-agent":"kube-probe/1.22","accept":"*/*","connection":"close"},"remoteAddress":"172.17.0.1","userAgent":"kube-probe/1.22"},"res":{"statusCode":404,"responseTime":26,"contentLength":60},"message":"GET /kibana 404 26ms - 60.0B"}
{"type":"response","@timestamp":"2021-09-22T09:55:28+00:00","tags":[],"pid":1216,"method":"get","statusCode":404,"req":{"url":"/kibana","method":"get","headers":{"host":"172.17.0.3:5601","user-agent":"kube-probe/1.22","accept":"*/*","connection":"close"},"remoteAddress":"172.17.0.1","userAgent":"kube-probe/1.22"},"res":{"statusCode":404,"responseTime":27,"contentLength":60},"message":"GET /kibana 404 27ms - 60.0B"}
{"type":"response","@timestamp":"2021-09-22T09:55:28+00:00","tags":[],"pid":1216,"method":"get","statusCode":404,"req":{"url":"/kibana","method":"get","headers":{"host":"172.17.0.3:5601","user-agent":"kube-probe/1.22","accept":"*/*","connection":"close"},"remoteAddress":"172.17.0.1","userAgent":"kube-probe/1.22"},"res":{"statusCode":404,"responseTime":22,"contentLength":60},"message":"GET /kibana 404 22ms - 60.0B"}
{"type":"response","@timestamp":"2021-09-22T09:55:38+00:00","tags":[],"pid":1216,"method":"get","statusCode":404,"req":{"url":"/kibana","method":"get","headers":{"host":"172.17.0.3:5601","user-agent":"kube-probe/1.22","accept":"*/*","connection":"close"},"remoteAddress":"172.17.0.1","userAgent":"kube-probe/1.22"},"res":{"statusCode":404,"responseTime":25,"contentLength":60},"message":"GET /kibana 404 25ms - 60.0B"}
Elasticsearch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
spec:
  selector:
    matchLabels:
      component: elasticsearch
  template:
    metadata:
      labels:
        component: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:7.14.2
          env:
            - name: discovery.type
              value: single-node
          ports:
            - containerPort: 9200
              protocol: TCP
          resources:
            limits:
              cpu: 2
              memory: 4Gi
            requests:
              cpu: 500m
              memory: 4Gi
          volumeMounts:
            - mountPath: /var/logs
              name: logs
      volumes:
        - name: logs
          persistentVolumeClaim:
            claimName: chi-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  labels:
    service: elasticsearch
spec:
  type: NodePort
  selector:
    component: elasticsearch
  ports:
    - port: 9200
      targetPort: 9200
Ingress-resource.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: chi-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /healthz
            pathType: Prefix
            backend:
              service:
                name: chi-svc
                port:
                  number: 3000
          - path: /kibana
            pathType: Prefix
            backend:
              service:
                name: kibana
                port:
                  number: 5601
          - path: /elasticsearch
            pathType: Prefix
            backend:
              service:
                name: elasticsearch
                port:
                  number: 9200
I ended up sorting out the issue by having different ingresses and removing the Nginx rewrite-target annotation. I went one step further and created a dedicated namespace for the logging infrastructure.
Namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kube-logging
Persistent-volume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: chi-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /data/logs/
    type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: chi-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  selector:
    matchLabels:
      type: local
Elastic-search.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: kube-logging
  labels:
    k8s-app: elasticsearch
    version: v1
spec:
  selector:
    matchLabels:
      k8s-app: elasticsearch
      version: v1
  template:
    metadata:
      labels:
        k8s-app: elasticsearch
        version: v1
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:7.15.0
          env:
            - name: discovery.type
              value: single-node
            - name: ES_JAVA_OPTS
              value: "-Xms512m -Xmx512m"
          ports:
            - containerPort: 9200
          resources:
            limits:
              cpu: 500m
              memory: 4Gi
            requests:
              cpu: 500m
              memory: 4Gi
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: kube-logging
  labels:
    k8s-app: elasticsearch
    version: v1
spec:
  type: NodePort
  selector:
    k8s-app: elasticsearch
  ports:
    - port: 9200
Fluentd.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-logging
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - namespaces
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: fluentd
    namespace: kube-logging
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-logging
  labels:
    k8s-app: fluentd-logging
    version: v1
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-logging
      version: v1
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      # Don't need this for Minikube
      # tolerations:
      #   - key: node-role.kubernetes.io/master
      #     effect: NoSchedule
      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
          env:
            - name: FLUENT_ELASTICSEARCH_HOST
              # <hostname>.<namespace>.svc.cluster.local
              value: "elasticsearch.kube-logging.svc.cluster.local"
            - name: FLUENT_ELASTICSEARCH_PORT
              value: "9200"
            - name: FLUENT_ELASTICSEARCH_SCHEME
              value: "http"
            - name: FLUENTD_SYSTEMD_CONF
              value: 'disable'
            - name: FLUENT_LOGSTASH_FORMAT
              value: "true"
            # X-Pack Authentication
            # =====================
            # - name: FLUENT_ELASTICSEARCH_USER
            #   value: "elastic"
            # - name: FLUENT_ELASTICSEARCH_PASSWORD
            #   value: "changeme"
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 200Mi
          volumeMounts:
            - name: varlog
              mountPath: /var/log
            # When actual pod logs are in /var/lib/docker/containers, the following lines should be used.
            - name: dockercontainerlogdirectory
              mountPath: /var/lib/docker/containers
              readOnly: true
            # When actual pod logs are in /var/log/pods, the following lines should be used.
            # - name: dockercontainerlogdirectory
            #   mountPath: /var/log/pods
            #   readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        # When actual pod logs are in /var/lib/docker/containers, the following lines should be used.
        - name: dockercontainerlogdirectory
          hostPath:
            path: /var/lib/docker/containers
        # When actual pod logs are in /var/log/pods, the following lines should be used.
        # - name: dockercontainerlogdirectory
        #   hostPath:
        #     path: /var/log/pods
Kibana.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    k8s-app: kibana
    version: v1
spec:
  selector:
    matchLabels:
      k8s-app: kibana
      version: v1
  template:
    metadata:
      labels:
        k8s-app: kibana
        version: v1
    spec:
      containers:
        - name: kibana
          image: docker.elastic.co/kibana/kibana:7.15.0
          env:
            - name: SERVER_NAME
              value: kibana
            - name: SERVER_BASEPATH
              value: /kibana
            - name: SERVER_REWRITEBASEPATH
              value: "true"
            # - name: XPACK_SECURITY_ENABLED
            #   value: "true"
          readinessProbe:
            httpGet:
              path: /kibana/api/status
              port: 5601
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /kibana/api/status
              port: 5601
            initialDelaySeconds: 15
            periodSeconds: 20
          ports:
            - containerPort: 5601
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    k8s-app: kibana
    version: v1
spec:
  type: NodePort
  selector:
    k8s-app: kibana
  ports:
    - port: 5601
      targetPort: 5601
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana-ingress
  namespace: kube-logging
spec:
  rules:
    - host: logging.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kibana
                port:
                  number: 5601
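(A quick smoke test once everything is applied; this assumes logging.com resolves to the ingress controller's address, e.g. via an /etc/hosts entry, and that SERVER_REWRITEBASEPATH=true makes Kibana answer under /kibana:)
curl -s -o /dev/null -w '%{http_code}\n' http://logging.com/kibana/api/status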
MariaDB on Kubernetes: Got an error reading communication packets
I'm trying to deploy an application with a MariaDB database on my k8s cluster. This is the deployment I use:
apiVersion: v1
kind: Service
metadata:
  name: app-back
  labels:
    app: app-back
  namespace: dev
spec:
  type: ClusterIP
  ports:
    - port: 8080
      name: app-back
  selector:
    app: app-back
---
apiVersion: v1
kind: Service
metadata:
  name: app-db
  labels:
    app: app-db
  namespace: dev
spec:
  type: ClusterIP
  clusterIP: None
  ports:
    - port: 3306
      name: app-db
  selector:
    app: app-db
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  labels:
    app: mysql
data:
  60-server.cnf: |
    [mysqld]
    bind-address = 0.0.0.0
    skip-name-resolve
    connect_timeout = 600
    net_read_timeout = 600
    net_write_timeout = 600
    max_allowed_packet = 256M
    default-time-zone = +00:00
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-db
  namespace: dev
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: app-db
  template:
    metadata:
      labels:
        app: app-db
    spec:
      containers:
        - name: app-db
          image: mariadb:10.5.8
          env:
            - name: MYSQL_DATABASE
              value: app
            - name: MYSQL_USER
              value: app
            - name: MYSQL_PASSWORD
              value: app
            - name: MYSQL_RANDOM_ROOT_PASSWORD
              value: "true"
          ports:
            - containerPort: 3306
              name: app-db
          resources:
            requests:
              memory: "200Mi"
              cpu: "100m"
            limits:
              memory: "400Mi"
              cpu: "200m"
          volumeMounts:
            - name: config-volume
              mountPath: /etc/mysql/conf.d
      volumes:
        - name: config-volume
          configMap:
            name: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-back
  namespace: dev
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: app-back
  template:
    metadata:
      labels:
        app: app-back
    spec:
      containers:
        - name: app-back
          image: private-repository/app/app-back:latest
          env:
            - name: spring.profiles.active
              value: dev
            - name: DB_HOST
              value: app-db
            - name: DB_PORT
              value: "3306"
            - name: DB_NAME
              value: app
            - name: DB_USER
              value: app
            - name: DB_PASSWORD
              value: app
          ports:
            - containerPort: 8080
              name: app-back
          resources:
            requests:
              memory: "200Mi"
              cpu: "100m"
            limits:
              memory: "200Mi"
              cpu: "400m"
      imagePullSecrets:
        - name: docker-private-credentials
When I run this, the mariadb container logs the following warnings:
2020-12-03 8:23:41 28 [Warning] Aborted connection 28 to db: 'app' user: 'app' host: 'xxx.xxx.xxx.xxx' (Got an error reading communication packets)
2020-12-03 8:23:41 25 [Warning] Aborted connection 25 to db: 'app' user: 'app' host: 'xxx.xxx.xxx.xxx' (Got an error reading communication packets)
2020-12-03 8:23:41 31 [Warning] Aborted connection 31 to db: 'app' user: 'app' host: 'xxx.xxx.xxx.xxx' (Got an error reading communication packets)
2020-12-03 8:23:41 29 [Warning] Aborted connection 29 to db: 'app' user: 'app' host: 'xxx.xxx.xxx.xxx' (Got an error reading communication packets)
...
My app is stuck trying to connect to the database. The application is a Spring Boot application built with this Dockerfile:
FROM maven:3-adoptopenjdk-8 AS builder
WORKDIR /usr/src/mymaven/
COPY . .
RUN mvn clean package -e -s settings.xml -DskipTests

FROM tomcat:9-jdk8-adoptopenjdk-hotspot
ENV spring.profiles.active=dev
ENV DB_HOST=localhost
ENV DB_PORT=3306
ENV DB_NAME=app
ENV DB_USER=app
ENV DB_PASSWORD=app
COPY --from=builder /usr/src/mymaven/target/app.war /usr/local/tomcat/webapps/
Any idea?
OK, I found the solution. This was not a MariaDB error; it is caused by Apache breaking the connection when running inside a container with too little memory. Setting the memory limit to 1500Mi solved the problem, as sketched below.
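(In other words, only the resources block of the app-back container needed to change, roughly like this; 1500Mi is the value that worked here, so tune it to your own application:)
resources:
  requests:
    memory: "1500Mi"
    cpu: "100m"
  limits:
    memory: "1500Mi"
    cpu: "400m"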
Istio Traffic Shifting not working in Internal communication
I deployed Istio on the local Kubernetes cluster running on my Mac. I created this VirtualService, DestinationRule and Gateway:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: code-gateway
  namespace: code
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "gateway.code"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: codemaster
  namespace: code
spec:
  hosts:
    - master.code
    - codemaster
  gateways:
    - codemaster-gateway
    - code-gateway
  http:
    - route:
        - destination:
            host: codemaster
            subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: codemaster-gateway
  namespace: code
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "master.code"
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: codemaster
  namespace: code
spec:
  host: codemaster
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
---
apiVersion: "v1"
kind: "Service"
metadata:
  labels:
    app: "codemaster"
    group: "code"
  name: "codemaster"
  namespace: "code"
spec:
  ports:
    - name: http-web
      port: 80
      targetPort: 80
  selector:
    app: "codemaster"
    group: "code"
  type: "ClusterIP"
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  labels:
    app: "codemaster"
    group: "code"
    env: "production"
  name: "codemaster"
  namespace: "code"
spec:
  replicas: 2
  selector:
    matchLabels:
      app: "codemaster"
      group: "code"
  template:
    metadata:
      labels:
        app: "codemaster"
        version: "v1"
        group: "code"
        env: "production"
    spec:
      containers:
        - env:
            - name: "KUBERNETES_NAMESPACE"
              valueFrom:
                fieldRef:
                  fieldPath: "metadata.namespace"
            - name: "SPRING_DATASOURCE_URL"
              value: "jdbc:postgresql://host.docker.internal:5432/code_master"
            - name: "SPRING_DATASOURCE_USERNAME"
              value: "postgres"
            - name: "SPRING_DATASOURCE_PASSWORD"
              value: "postgres"
          image: "kzone/code/codemaster:1.0.0"
          imagePullPolicy: "IfNotPresent"
          name: "codemaster"
          ports:
            - containerPort: 80
              name: "http"
              protocol: "TCP"
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  labels:
    app: "codemaster"
    group: "code"
    env: "canary"
  name: "codemaster-canary"
  namespace: "code"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "codemaster"
      group: "code"
  template:
    metadata:
      labels:
        app: "codemaster"
        version: "v2"
        group: "code"
        env: "canary"
    spec:
      containers:
        - env:
            - name: "KUBERNETES_NAMESPACE"
              valueFrom:
                fieldRef:
                  fieldPath: "metadata.namespace"
            - name: "SPRING_DATASOURCE_URL"
              value: "jdbc:postgresql://host.docker.internal:5432/code_master"
            - name: "SPRING_DATASOURCE_USERNAME"
              value: "postgres"
            - name: "SPRING_DATASOURCE_PASSWORD"
              value: "postgres"
          image: "kzone/code/codemaster:1.0.1"
          imagePullPolicy: "IfNotPresent"
          name: "codemaster"
          ports:
            - containerPort: 80
              name: "http"
              protocol: "TCP"
These are the services running in the code namespace:
codemaster   ClusterIP   10.103.151.80   <none>   80/TCP   18h
gateway      ClusterIP   10.104.154.57   <none>   80/TCP   18h
I deployed 2 spring-boot microservices in k8s; one is a spring-boot gateway. These are the pods running in the code namespace:
codemaster-6cb7b8ddf5-mlpzn          2/2   Running   0   7h3m
codemaster-6cb7b8ddf5-sgzt8          2/2   Running   0   7h3m
codemaster-canary-756697d9c8-22qb2   2/2   Running   0   7h3m
gateway-5b5c8697f4-jpb4q             2/2   Running   0   7h3m
When I send a request to http://master.code/version (the gateway created for the codemaster service), it always goes to the correct subset.
But when I send a request via the spring-boot gateway (http://gateway.code/codemaster/version), the request doesn't go only to subset v1; requests go round-robin to all 3 pods. This is what I see in Kiali. I want to apply traffic shifting between the gateway and the other services.
Istio relies on the Host header of a request to apply its traffic rules. Since you are using Spring Boot gateway to make the request, Ribbon hits the pod IPs directly instead of hitting the service. To avoid that, provide a static server list for the /version route, such as http://master.code.cluster.local, in your Spring Boot gateway config, so that Ribbon's dynamic endpoint discovery is bypassed. This should fix the problem; see the sketch below.
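(A sketch of what such a static route could look like in the Spring Cloud Gateway application.yml; the route id, path and StripPrefix filter here are illustrative, the point being that uri is a fixed host rather than a lb:// Ribbon-resolved one:)
spring:
  cloud:
    gateway:
      routes:
        - id: codemaster
          uri: http://master.code.cluster.local
          predicates:
            - Path=/codemaster/**
          filters:
            - StripPrefix=1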
After doing some research I found that there is no CNI in Docker for Mac Kubernetes. Because of that, traffic shifting does not work on Docker for Mac Kubernetes.
Istio - GKE - gRPC config stream closed; upstream connect error or disconnect/reset before headers. reset reason: connection failure
I am trying to run my spring boot microservice in a GKE cluster with istio 1.1.5, the latest version as of now. It throws an error and the pod never spins up. If I run it as a separate service in Kubernetes Engine it works perfectly, but with istio it does not work. The purpose of using istio is to host multiple microservices and to use the features istio provides. Here is my yaml file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: revenue
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: revenue-serv
        tier: backend
        track: stable
    spec:
      containers:
        - name: backend
          image: "gcr.io/finacials/revenue-serv:latest"
          imagePullPolicy: Always
          ports:
            - containerPort: 8081
          livenessProbe:
            httpGet:
              path: /
              port: 8081
            initialDelaySeconds: 15
            timeoutSeconds: 30
          readinessProbe:
            httpGet:
              path: /
              port: 8081
            initialDelaySeconds: 15
            timeoutSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  name: revenue-serv
spec:
  ports:
    - port: 8081
      #targetPort: 8081
      #protocol: TCP
      name: http
  selector:
    app: revenue-serv
    tier: backend
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway
  annotations:
    kubernetes.io/ingress.class: "istio"
spec:
  rules:
    - http:
        paths:
          - path: /revenue/.*
            backend:
              serviceName: revenue-serv
              servicePort: 8081
Thanks for your valuable feedback.
I have found the issue. I removed the readinessProbe and livenessProbe and created an ingress gateway and virtual service. It worked.
deployment & service:
#########################################################################################
# This is for deployment - Service & Deployment in Kubernetes             ################
# Author: Arindam Banerjee                                                ################
#########################################################################################
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: revenue-serv
  namespace: dev
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: revenue-serv
        version: v1
    spec:
      containers:
        - name: revenue-serv
          image: "eu.gcr.io/rcup-mza-dev/revenue-serv:latest"
          imagePullPolicy: Always
          ports:
            - containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  name: revenue-serv
  namespace: dev
spec:
  ports:
    - port: 8081
      name: http
  selector:
    app: revenue-serv
gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: worldcup-serv-gateway
  namespace: dev
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
virtual-service.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: revenue-serv-virtualservice
  namespace: dev
spec:
  hosts:
    - "*"
  gateways:
    - revenue-serv-gateway
  http:
    - route:
        - destination:
            host: revenue-serv