How to rewrite in Istio the right way and avoid 404 errors?

Scenario-
I have 2 deployments: deployment-1 with the label version: v1 and deployment-2 with the label version: v2, both exposed by a NodePort service, test-1. I have created a VirtualService with two match conditions as follows:
- match:
  - uri:
      exact: /v1
  rewrite:
    uri: /
  route:
  - destination:
      host: test-1
      port:
        number: 80
      subset: v1
- match:
  - uri:
      exact: /v2
  rewrite:
    uri: /
  route:
  - destination:
      host: test-1
      port:
        number: 80
      subset: v2
The code file can be found here
Problem-
When I visit the ingress gateway IP at http://ingress-gateway-ip/v1, I encounter a 404 error in the console saying http://ingress-gateway-ip/favicon.ico was not found (because the path has been rewritten to "/"); the stylings and js are also absent at this route. But when I visit http://ingress-gateway-ip/v1/favicon.ico directly, I can see the favicon icon along with all the js and stylings.
Please find the screenshots of the problem here
Expectation-
How can I access these two services using prefix routing in the URL, so that when I navigate to /v1 only the v1 version comes up without 404s, and when I navigate to /v2 only the v2 version comes up?
EDIT-1:
Added a code snippet from the original code
Added the code file link
EDIT-2:
Added screenshots of the problem
Modified the problem statement for clearer understanding

How can I access these two services using prefix routing in the URL, so that when I navigate to /v1 only the v1 version comes up without 404s, and when I navigate to /v2 only the v2 version comes up?
I assume the issue is in your DestinationRule: in the v2 subset the label is version: v1 when it should be version: v2. That's why requests to both /v1 and /v2 went only to the v1 version of your pods.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: test-destinationrule
spec:
  host: test-1
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v1   <---
It should be
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: test-destinationrule
spec:
  host: test-1
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
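As a quick sanity check (assuming your deployments carry the version labels from the question), confirm that each subset actually selects pods:

kubectl get pods -l version=v1 --show-labels
kubectl get pods -l version=v2 --show-labels   # if this returns nothing, the v2 subset matches no pods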
When I try to visit this Ingress Gateway IP, I encounter a 404 error in the console saying http://ingress-gateway-ip/favicon.ico
It's working as designed: you haven't specified a route for /, only for /v1 and /v2.
If you want / to be accessible, you would have to add another match for it (note that a catch-all prefix: / match should come last, since Istio evaluates routes in order and uses the first match):
- match:
  - uri:
      prefix: /
  route:
  - destination:
      host: test-1
Here is a working example with 2 nginx pods; take a look.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-v1
spec:
  selector:
    matchLabels:
      version: v1
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
        version: v1
    spec:
      containers:
      - name: nginx1
        image: nginx
        ports:
        - containerPort: 80
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "echo Hello nginx1 > /usr/share/nginx/html/index.html"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-v2
spec:
  selector:
    matchLabels:
      version: v2
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
        version: v2
    spec:
      containers:
      - name: nginx2
        image: nginx
        ports:
        - containerPort: 80
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "echo Hello nginx2 > /usr/share/nginx/html/index.html"]
---
apiVersion: v1
kind: Service
metadata:
  name: test-1
  labels:
    app: frontend
spec:
  ports:
  - name: http-front
    port: 80
    protocol: TCP
  selector:
    app: frontend
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: simpleexample
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - '*'
    port:
      name: http
      number: 80
      protocol: HTTP
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: test-virtualservice
spec:
  gateways:
  - simpleexample
  hosts:
  - '*'
  http:
  - match:
    - uri:
        prefix: /v1
    rewrite:
      uri: /
    route:
    - destination:
        host: test-1
        port:
          number: 80
        subset: v1
  - match:
    - uri:
        prefix: /v2
    rewrite:
      uri: /
    route:
    - destination:
        host: test-1
        port:
          number: 80
        subset: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: test-destinationrule
spec:
  host: test-1
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
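To try the example end to end (assuming the manifests above are saved in a single file, here called two-versions.yaml, a hypothetical name), apply it and confirm both pods are labeled as expected:

kubectl apply -f two-versions.yaml
kubectl get pods -l app=frontend --show-labels   # expect one pod with version=v1 and one with version=v2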
Results from curl:
curl -v ingress-gateway-ip/

404 Not Found (there is no route specified for / in the virtual service)

curl -v ingress-gateway-ip/v1

HTTP/1.1 200 OK
Hello nginx1

curl -v ingress-gateway-ip/v2

HTTP/1.1 200 OK
Hello nginx2
EDIT
The problem is that all the stylings and js are not readable by the browser at "/" when they are being rewritten.
That was already explained by @Rinor here.
I would add this Istio in Practice tutorial here; it explains well a way of dealing with that problem, which is to add more paths for your dependencies (js, css, etc.):
Let's break down the requests that should be routed to Frontend:
Exact path / should be routed to Frontend to get index.html.
Prefix path /static/* should be routed to Frontend to get any static files needed by the frontend, like Cascading Style Sheets and JavaScript files.
Paths matching the regex ^.*\.(ico|png|jpg)$ should be routed to Frontend, as these are images that the page needs to show.
http:
- match:
  - uri:
      exact: /
  - uri:
      exact: /callback
  - uri:
      prefix: /static
  - uri:
      regex: '^.*\.(ico|png|jpg)$'
  route:
  - destination:
      host: frontend
      port:
        number: 80
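Adapting that to the setup in the question is sketched below (my adaptation, not part of the original answer). The one assumption to call out: an asset request such as /favicon.ico carries no /v1 or /v2 prefix the gateway could use to pick a subset, so this sketch sends asset paths to a single subset and only works if the static files are identical in both versions:

http:
- match:
  - uri:
      prefix: /v1
  rewrite:
    uri: /
  route:
  - destination:
      host: test-1
      subset: v1
- match:
  - uri:
      prefix: /v2
  rewrite:
    uri: /
  route:
  - destination:
      host: test-1
      subset: v2
# catch-all for css/js/images requested with absolute paths
- match:
  - uri:
      prefix: /static
  - uri:
      regex: '^.*\.(ico|png|jpg|css|js)$'
  route:
  - destination:
      host: test-1
      subset: v1   # assumption: assets are version-agnostic, so either subset works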
Let me know if you have any more questions.

Related

yq replace value keep rest intact

I just want to replace a value in a multi-document .yaml file with yq, leaving the rest of the document intact. So the question is: how do I replace the value of the image key only when the adjacent name key is node-red? I have tried:

yq '.spec.template.spec.containers[] | select(.name = "node-red") | .image |= "test"' ./test.yaml
with this yaml:
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: node-red
spec:
  ingressClassName: nginx-private
  rules:
  - host: node-red.k8s.lan
    http:
      paths:
      - path: "/"
        pathType: Prefix
        backend:
          service:
            name: node-red
            port:
              number: 1880
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: node-red
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-red
  template:
    metadata:
      labels:
        app: node-red
    spec:
      containers:
      - name: node-red
        image: "cir.domain.com/node-red:84"
the output is:
name: node-red
image: "test"
What I want is yaml output with only an updated value in the image key, like this:
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: node-red
spec:
  ingressClassName: nginx-private
  rules:
  - host: node-red.k8s.lan
    http:
      paths:
      - path: "/"
        pathType: Prefix
        backend:
          service:
            name: node-red
            port:
              number: 1880
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: node-red
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-red
  template:
    metadata:
      labels:
        app: node-red
    spec:
      containers:
      - name: node-red
        image: "test"
Keep the context by including the traversal and selection in the LHS of the assignment/update, i.e. by putting parentheses around all of it:
yq '(.spec.template.spec.containers[] | select(.name == "node-red") | .image) |= "test"'
In order to prevent the creation of elements previously not present in one of the documents, add filters using select accordingly, e.g.:
yq '(.spec | select(.template).template.spec.containers[] | select(.name == "node-red").image) |= "test"'
Note that in any case it should read == not = when checking for equality.
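As a quick check against the ./test.yaml from the question (assuming yq v4, i.e. mikefarah/yq), the guarded form can be applied in place and the result inspected per document:

yq -i '(.spec | select(.template).template.spec.containers[] | select(.name == "node-red").image) |= "test"' ./test.yaml
yq 'select(.kind == "Deployment").spec.template.spec.containers[0].image' ./test.yaml   # prints: test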

How Can I Publish Kibana

I have a question, if possible (I work with GKE).
This is my Kibana:
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.6.2
  count: 1
  elasticsearchRef:
    name: cdbridgerpayelasticsearch
  http:
    service:
      spec:
        type: LoadBalancer
(It ran well with the LoadBalancer - ... in the browser.)
And I wanted to publish it on https://some.thing.net, so I made an Ingress:
apiVersion: v1
kind: Secret
metadata:
  name: bcd-kibana-testsecret-tls
  namespace: default
type: kubernetes.io/tls
data:
  tls.crt: |
    LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUREVENDQWZXZ0F3SUJBZ0lVVkk1ellBakR0
    RWFyd0Zqd2xuTnlLeU0xMnNrd0RRWUpLb1pJaHZjTkFRRUwKQlFBd0ZqRVVNQklHQTFVRUF3d0xa
    bTl2TG1KaGNpNWpiMjB3SGhjTk1qSXdOakUxTVRJeE16RTVXaGNOTWpNdwpOakUxTVRJeE16RTVX
    akFXTVJRd0VnWURWUVFEREF0bWIyOHVZbUZ5TG1OdmJUQ0NBU0l3RFFZSktvWklodmNOCkFRRUJC
    UUFEZ2dFUEFEQ0NBUW9DZ2dFQkFLMVRuTDJpZlluTUFtWEkrVmpXQU9OSjUvMGlpN0xWZlFJMjQ5
    aVoKUXl5Zkw5Y2FQVVpuODZtY1NSdXZnV1JZa3dLaHUwQXpqWW9aQWR5N21TQ3ROMmkzZUZIRGdq
    NzdZNEhQcFA1TQpKVTRzQmU5ZmswcnlVeDA3aU11UVA5T3pibGVzNHcvejJIaXcyYVA2cUl5ZFI3
    bFhGSnQ0NXNSeDJ3THRqUHZCClErMG5UQlJrSndsQVZQYTdIYTN3RjBTSDJXL1dybTgrNlRkVGpG
    MmFUanpkMFFXdy9hKzNoUU9HSnh4b1JxY1MKYmNNYmMrYitGeCtqZ3F2N0xuQ3R1Njd5L2tsb3ZL
    Z2djdW41ZVNqY3krT0ZpdTNhY2hLVDlUeWJlSUxPY2FSYQp3KzJSeitNOUdTMWR2aUI0Q0dRNnlw
    RDhkazc4bktja1FBam9QMXV5ZXJnR2hla0NBd0VBQWFOVE1GRXdIUVlEClZSME9CQllFRkpLUWps
    KzI0TVJrNVpqTlB4ZVRnVU1pbE5xWk1COEdBMVVkSXdRWU1CYUFGSktRamwrMjRNUmsKNVpqTlB4
    ZVRnVU1pbE5xWk1BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dF
    QgpBSUp2Y1ZNclpEUEZ6TEhvZ3IyZklDN0E0TTB5WFREZXhONWNEZFFiOUNzVk0zUjN6bkZFU1Jt
    b21RVVlCeFB3CmFjUVpWQ25qM0xGamRmeExBNkxrR0hhbjBVRjhDWnJ4ODRRWUtzQTU2dFpJWFVm
    ZXRIZk1zOTZsSE5ROW5samsKT3RoazU3ZkNRZVRFMjRCU0RIVDJVL1hhNjVuMnBjcFpDU2FYWStF
    SjJaWTBhZjlCcVBrTFZud3RTQ05lY0JLVQp3N0RCM0o4U2h1Z0FES21xU2VJM1R2N015SThvNHJr
    RStBdmZGTUw0YlpFbS9IWW4wNkxQdVF3TUE1cndBcFN3CnlDaUJhcjRtK0psSzNudDRqU0hGeU4x
    N1g1M1FXcnozQTNPYmZSWXI0WmJObE8zY29ObzFiQnd3eWJVVmgycXoKRGIrcnVWTUN5WjBTdXlE
    OGZURFRHY1E9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
  tls.key: |
    LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZB
    QVNDQktjd2dnU2pBZ0VBQW9JQkFRQ3RVNXk5b24ySnpBSmwKeVBsWTFnRGpTZWY5SW91eTFYMENO
    dVBZbVVNc255L1hHajFHWi9PcG5Fa2JyNEZrV0pNQ29idEFNNDJLR1FIYwp1NWtnclRkb3QzaFJ3
    NEkrKzJPQno2VCtUQ1ZPTEFYdlg1Tks4bE1kTzRqTGtEL1RzMjVYck9NUDg5aDRzTm1qCitxaU1u
    VWU1VnhTYmVPYkVjZHNDN1l6N3dVUHRKMHdVWkNjSlFGVDJ1eDJ0OEJkRWg5bHYxcTV2UHVrM1U0
    eGQKbWs0ODNkRUZzUDJ2dDRVRGhpY2NhRWFuRW0zREczUG0vaGNmbzRLcit5NXdyYnV1OHY1SmFM
    eW9JSExwK1hrbwozTXZqaFlydDJuSVNrL1U4bTNpQ3puR2tXc1B0a2MvalBSa3RYYjRnZUFoa09z
    cVEvSFpPL0p5bkpFQUk2RDliCnNucTRCb1hwQWdNQkFBRUNnZ0VCQUpyb3FLUy83aTFTM1MyMVVr
    MTRickMxSkJjVVlnRENWNGk4SUNVOHpWRzcKTUZteVJPT0JFc0FiUXlmd1V0ZXBaaktxODUwc3Rp
    cWZzUTlqeHpieU9SeHBKYXNGN29sMXluaUJhYmd4dkFIQwp6TWNsQjVLclEyZFVCeTNRVFl0YXla
    cW9sUU56NzV2bWk0M0lBQTQwbjU3aFdqU2QrTG5IL0hNQWRzbW04SnVwCjA5d3VHMGltdEhQYm5B
    TERnY2N3V04zY2xxU0FtR3pxbW9kOEE1YjBWZTQwZHhjTXh3TldaN0JqOTBnamNWYnQKVU1aaFhp
    T2E2bnNSSHZDQjF0b0lmZVBuOEdzRk9nOUVqUHZDTWM4QWZKVW84Qk5TK2N0RmF3RjRWaUUyUHlB
    VgpxZzR1MHhBQ2Qya28zUUtpZFpsbjAyZkc2ZWg1UmxGYzdsL1RiWWxQY2xFQ2dZRUEyU0NvK2pJ
    MUZ1WkZjSFhtCm1Ra2Z3Q0Vvc28xd3VTV3hRWDBCMnJmUmJVbS9xdXV4Rm92MUdVc0hwNEhmSHJu
    RWRuSklqVUw4bWxpSjFFUTkKVUpxZC9SVkg1OTdZZ2Zib1dCbHh0cVhIRFRObjNIU3JzQmJlQTh6
    NXFMZjE2QzZaa3U3YmR3L2pxazJVaFgrcwp6T3piYTVqYVVYU0pXYXpzRmZoZjdSMlZ3UTBDZ1lF
    QXpGdDZBTDNYendBSi9EcXU5QlVuaTIxemZXSUp5QXFwCnBwRnNnQUczVkVYZWFRMjVGcjV6cURn
    dlBWRkF5QUFYMU9TL3pHZVcxaDQ0SERzQjRrVmdxVlhHSTdoQUV1RjYKRlgra1M5Uk5QdmFsYXdQ
    cXp4VTdPQmcvUis5NVB1NW1oNnFrRWVUekM2T21ZUGRGbmVQcWxKZk03YU43OEhjOApGVU1xRTBa
    NkNVMENnWUJJUUVYNmU1cU85REZIS3ZTQkdEZ29odUEwQ2p6b1gxS01xRHhsdTZWRTZMV08rcjhD
    CjhhK3Rxdm54RTVaYmN4V2RGSXB2OTBwM1VkOExjMm16MkwrWjUrcjFqWUllUFRzemxjUHhNMWo1
    VzVIRUdrN0gKV2RTbkR4NUV0bkp0d0pQNkFPR216UExGU091VFFOa1BtQUdyM0VGSnVhMjYyWC8y
    RDZCY0Z1d3VRUUtCZ0V0aApHcm1YVFRsdnZEOHJya2tlWEgzVG01d09RNmxrTlh2WmZIb2pKK3FQ
    OHlBeERhclVDWGx0Y0E5Z0gxTW1wYVBECjFQT2k2a0tFMXhHaXVta3FTaU5zSGpBaTBJK21XQkFD
    Q3lwbFh6RHdiY2Z4by9WSzBaTTVibTRzYVQ3TFZVcUoKcVFkb3VqWDY0VzQzQjVqYjd6VnNZUXp2
    RnRKMlNOVlc5dmd4TU9hcEFvR0FDaHphN3BMWWhueFA5QWVUL1pxZwpvUWM0ckh1SDZhTDMveCtq
    dlpRVXdiSitKOWZGY05pcTFZRlJBL2RJSEJWcGZTMWJWR3N3OW9MT2tsTWxDckxRCnJKSjkzWlRu
    dFdSQ1FCOTNMbXhoZmJuRk9CemZtbHZYTjFJeE1FNStVbVZRQmRLaS9YMktMZnFSUW5HVTExL2UK
    NytFcXFFbllrWTBraE1HL0xqRlRTU0E9Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-bcd-ingress
spec:
  tls:
  - hosts:
    - some.thing.net
    secretName: bcd-kibana-testsecret-tls
  rules:
  - host: some.thing.net
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: quickstart-kb-http
            port:
              number: 5601
(P.S. To get the tls.crt and tls.key I ran, in the GCP CLI:
kubectl create secret tls bcdsecret --key="tls.key" --cert="tls.crt"
and then:
cat tls.crt | base64 and cat tls.key | base64)
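Side note, not from the original post: kubectl can also render that Secret manifest directly, which avoids base64-encoding the files by hand:

kubectl create secret tls bcd-kibana-testsecret-tls --key=tls.key --cert=tls.crt --dry-run=client -o yaml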
But the Ingress failed.
When I ran kubectl get ingress I got:

$ kubectl get ingress
W0615 21:28:17.715766 604 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead.
To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
NAME                 CLASS   HOSTS            ADDRESS         PORTS     AGE
...
tls-bcd-ingressbcd           some.thing.net   34.120.185.85   80, 443   35m

but when I opened that hostname in the browser I got nothing.
What am I supposed to do?
Thank you!
Or in other words: how can I publish Kibana on a hostname?
Thanks :)
Frida

Can't communicate with pods through services

I have two deployments, where one of them creates 4 replicas of php-fpm and the other is an nginx webserver exposed to the Internet through an Ingress.
The problem is that I can't connect to the app service from the webserver pod (same issue when trying to connect to other services).
ping result:
$ ping -c4 app.ternobo-connect
PING app.ternobo-connect (10.245.240.225): 56 data bytes
--- app.ternobo-connect ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
but the pods are individually available at their IPs.
app-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    ternobo.kubernates.service: app
    ternobo.kubernates.network/app-network: "true"
  name: app
  namespace: ternobo-connect
spec:
  replicas: 4
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 50%
  selector:
    matchLabels:
      ternobo.kubernates.service: app
  template:
    metadata:
      labels:
        ternobo.kubernates.network/app-network: "true"
        ternobo.kubernates.service: app
    spec:
      containers:
      - env:
        - name: SERVICE_NAME
          value: app
        - name: SERVICE_TAGS
          value: production
        image: ghcr.io/ternobo/ternobo-connect:0.1.01
        name: app
        ports:
        - containerPort: 9000
        resources: {}
        tty: true
        workingDir: /var/www
        envFrom:
        - configMapRef:
            name: appenvconfig
      imagePullSecrets:
      - name: regsecret
      restartPolicy: Always
status: {}
app-service.yaml:
apiVersion: v1
kind: Service
metadata:
  labels:
    ternobo.kubernates.network/app-network: "true"
  name: app
  namespace: ternobo-connect
spec:
  type: ClusterIP
  ports:
  - name: "9000"
    port: 9000
    targetPort: 9000
  selector:
    ternobo.kubernates.service: app
status:
  loadBalancer: {}
network-policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app-network
  namespace: ternobo-connect
spec:
  podSelector: {}
  ingress:
  - {}
  policyTypes:
  - Ingress
I also tried removing the network policy, but it didn't work! I also tried changing the podSelector rules to only select pods with the ternobo.kubernates.network/app-network: "true" label.
Kubernetes service URLs are in the my-svc.my-namespace.svc.cluster-domain.example format, see: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#a-aaaa-records
So the ping should be:
ping -c4 app.ternobo-connect.svc.cluster.local
If the webserver is in the same namespace as the service, you can ping the service name directly:
ping -c4 app
I don't know the impact of the network policy; I haven't worked with it.
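One addition of my own, not from the answer above: a Service's ClusterIP is a virtual IP and typically does not answer ICMP at all, so a DNS lookup from a throwaway pod can be a more reliable check than ping:

kubectl run tmp --rm -it --image=busybox --restart=Never -n ternobo-connect -- nslookup app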

yq add extra contents to yaml

I have tried many times with different combinations, but I can't get it working. Here is my yq command:

yq -i e '(.spec.template.spec.containers[] | select(.name == "od-fe").image) = "abcd"'

It is supposed to replace the deployment image, which works, but it also adds template.spec.containers to the Service. Here is the deployment + service yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: od
  name: od-fe
spec:
  replicas: 2
  selector:
    matchLabels:
      app: od-fe
  template:
    metadata:
      labels:
        app: od-fe
    spec:
      containers:
      - name: od-fe
        image: od-frontend:latest   # <<< replace here only
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  namespace: od
  name: od-fe-service
  labels:
    run: od-fe-service
spec:
  ports:
  - port: 3000
    targetPort: 3000
    protocol: TCP
  type: NodePort
  selector:
    app: od-fe
Now the issue is that the service also gets changed, becoming:
apiVersion: v1
kind: Service
metadata:
  namespace: od
  name: od-fe-service
  labels:
    run: od-fe-service
spec:
  ports:
  - port: 3000
    targetPort: 3000
    protocol: TCP
  type: NodePort
  selector:
    app: od-fe
  template:
    spec:
      containers: []
One way to fix that would be to include a select statement at the top level so the expression acts only on documents of kind Deployment:

yq e '(select(.kind == "Deployment").spec.template.spec.containers[] | select(.name == "od-fe").image) |= "abcd"' yaml

Note: if you are using yq version 4.18.1 or later, the eval flag e is no longer needed, as it has been made the default action.
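For an in-place edit, the same guarded expression works with yq's -i flag (the file name deploy.yaml here is hypothetical):

yq -i '(select(.kind == "Deployment").spec.template.spec.containers[] | select(.name == "od-fe").image) |= "abcd"' deploy.yaml
# the Service document passes through untouched, since select(.kind == "Deployment") filters it out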

Cannot connect external Oracle database from inside Kubernetes

I'm deploying an app on k8s, but I cannot connect to Oracle, which runs on an external machine, from inside the cluster.
I tried to connect directly via the DB IP and via a Service endpoint, but neither works.
Please help me solve this.
Here is the database info:
Database IP: 192.168.1.25
Port: 1521
Here is service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: mydb
spec:
  ports:
  - port: 1521
    targetPort: 1521
    protocol: TCP
---
kind: Endpoints
apiVersion: v1
metadata:
  name: mydb
subsets:
- addresses:
  - ip: 192.168.1.25
  ports:
  - port: 1521
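One observation of my own, not from the original post: the Service above has no namespace set, so it is created in default, while the web Deployment below runs in apptest; a bare HOST=mydb only resolves inside the Service's own namespace. Assuming a throwaway busybox pod, the wiring can be checked like this:

kubectl get endpoints mydb   # should list 192.168.1.25:1521
kubectl run tmp --rm -it --image=busybox --restart=Never -n apptest -- nslookup mydb.default.svc.cluster.local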
And the connection string is:
"User ID=test;Password=pwd;Data Source=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=mydb)(PORT=1521)))(CONNECT_DATA=(SID=ORCLCDB)));";
Here is app_deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: apptest
spec:
  selector:
    matchLabels:
      run: web
  replicas: 1
  template:
    metadata:
      labels:
        run: web
    spec:
      containers:
      - name: app-web
        image: app-web
        imagePullPolicy: IfNotPresent
        env:
        - name: "ASPNETCORE_ENVIRONMENT"
          value: "Staging"
        volumeMounts:
        - name: app-web-log
          mountPath: /app/Log
      volumes:
      - name: app-web-log
        hostPath:
          path: /log
          type: DirectoryOrCreate
---
apiVersion: v1
kind: Service
metadata:
  name: web-svc
  namespace: apptest
  labels:
    run: web
spec:
  type: NodePort
  ports:
  - name: web
    protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30200
  selector:
    run: web
The output when I run the command kubectl logs core-dns-66bff467f8-htn5w -n kube-system:
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b
[ERROR] plugin/errors: 2 4425317009050045698.2183862687326411378. HINFO: read udp 10.244.0.8:37732->8.8.8.8:53: read: no route to host
[ERROR] plugin/errors: 2 4425317009050045698.2183862687326411378. HINFO: read udp 10.244.0.8:41292->8.8.8.8:53: read: no route to host
[ERROR] plugin/errors: 2 4425317009050045698.2183862687326411378. HINFO: read udp 10.244.0.8:36947->8.8.8.8:53: read: no route to host
