Accessing an external proxy from the Istio egress gateway

We need to tightly control all traffic from the applications in the K8s namespace to external sites. Since K8s NetworkPolicy objects only allow specifying target IP addresses, we'd prefer to use Istio to manage the outgoing traffic so that we can use hostnames instead of CIDRs to configure our external services. Furthermore, we have an enterprise-wide proxy which must be used for all traffic to the internet.
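For illustration, a plain NetworkPolicy egress rule can only pin destinations to IP ranges; a minimal sketch of the kind of rule we would otherwise have to maintain (the name, namespace and CIDR are placeholders, not part of our setup) looks like this:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-cidr   # hypothetical name
  namespace: my-namespace      # placeholder namespace
spec:
  podSelector: {}              # applies to all pods in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 203.0.113.0/24   # NetworkPolicy only accepts IP ranges, not hostnames
    ports:
    - protocol: TCP
      port: 443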
Following https://istio.io/latest/docs/tasks/traffic-management/egress/http-proxy/ we managed to get the pod's sidecar (with the proper environment variables, HTTP_PROXY etc., set) to access the internet via the corporate proxy. This means that the communication path POD --> sidecar --> proxy --> external site works. However, in this case the Istio egress gateway is bypassed.
What we need instead is the following communication path: POD --> sidecar --> Istio egress gateway --> proxy --> external site.
Our current setup is the following:
The pods have the HTTP_PROXY env variable set to proxy.int.xxx.zz:8080 (a sketch of how this is applied follows the manifests below).
We have the following yamls applied:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: proxylb
spec:
  hosts:
  - proxy.int.xxx.zz
  ports:
  - number: 8080
    name: tcp
    protocol: TCP
  location: MESH_EXTERNAL
  resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: cnn
spec:
  hosts:
  - edition.cnn.com
  ports:
  - number: 80
    name: http-port
    protocol: HTTP
  - number: 443
    name: tls
    protocol: TLS
  resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: orf
spec:
  hosts:
  - www.orf.at
  ports:
  - number: 80
    name: http-port
    protocol: HTTP
  - number: 443
    name: tls
    protocol: TLS
  resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-egressgateway
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - edition.cnn.com
    - www.orf.at
  - port:
      number: 443
      name: tls
      protocol: TLS
    hosts:
    - edition.cnn.com
    - www.orf.at
    tls:
      mode: PASSTHROUGH
  - port:
      number: 8080
      name: tcp
      protocol: TCP
    hosts:
    - proxy.int.xxx.zz
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: istio-egressgateway
spec:
  host: istio-egressgateway.istio-system.svc.cluster.local
  subsets:
  - name: cnn
  - name: orf
  - name: proxylb
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: direct-cnn-through-egress-gateway
spec:
  hosts:
  - edition.cnn.com
  gateways:
  - mesh
  - istio-egressgateway
  tls:
  - match:
    - gateways:
      - mesh
      port: 443
      sniHosts:
      - edition.cnn.com
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        subset: cnn
        port:
          number: 443
  - match:
    - gateways:
      - istio-egressgateway
      port: 443
      sniHosts:
      - edition.cnn.com
    route:
    - destination:
        host: edition.cnn.com
        port:
          number: 443
      weight: 100
  http:
  - match:
    - gateways:
      - mesh
      port: 80
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        subset: cnn
        port:
          number: 80
      weight: 100
  - match:
    - gateways:
      - istio-egressgateway
      port: 80
    route:
    - destination:
        host: edition.cnn.com
        port:
          number: 80
      weight: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: direct-orf-through-egress-gateway
spec:
  hosts:
  - www.orf.at
  gateways:
  - mesh
  - istio-egressgateway
  tls:
  - match:
    - gateways:
      - mesh
      port: 443
      sniHosts:
      - www.orf.at
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        subset: orf
        port:
          number: 443
  - match:
    - gateways:
      - istio-egressgateway
      port: 443
      sniHosts:
      - www.orf.at
    route:
    - destination:
        host: www.orf.at
        port:
          number: 443
      weight: 100
  http:
  - match:
    - gateways:
      - mesh
      port: 80
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        subset: orf
        port:
          number: 80
      weight: 100
  - match:
    - gateways:
      - istio-egressgateway
      port: 80
    route:
    - destination:
        host: www.orf.at
        port:
          number: 80
      weight: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: direct-proxylb-through-egress-gateway
spec:
  hosts:
  - proxy.int.xxx.zz
  gateways:
  - mesh
  - istio-egressgateway
  tcp:
  - match:
    - gateways:
      - mesh
      port: 8080
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        subset: proxylb
        port:
          number: 8080
      weight: 100
  - match:
    - gateways:
      - istio-egressgateway
      port: 8080
    route:
    - destination:
        host: proxy.int.xxx.zz
        port:
          number: 8080
      weight: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: trilateral
spec:
  egress:
  - hosts:
    - "./*"
  outboundTrafficPolicy:
    mode: REGISTRY_ONLY
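For reference, the HTTP_PROXY variable mentioned above is set on the application pods roughly like this (a minimal sketch: the pod name, image and the HTTPS_PROXY entry are illustrative assumptions, only the proxy address comes from our setup):
apiVersion: v1
kind: Pod
metadata:
  name: sample-app           # placeholder pod name
spec:
  containers:
  - name: app                # placeholder container name and image
    image: curlimages/curl
    command: ["sleep", "infinity"]
    env:
    # route plain HTTP and HTTPS traffic via the corporate proxy
    - name: HTTP_PROXY
      value: "http://proxy.int.xxx.zz:8080"
    - name: HTTPS_PROXY
      value: "http://proxy.int.xxx.zz:8080"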
However, when running a curl we get:
curl -k -I https://istio.io
curl: (56) Recv failure: Connection reset by peer
Should this setup work? What is missing?
Thanks a lot in advance for any hints.

Related

Can't communicate with pods through services

I have two Deployments: one of them creates 4 replicas of php-fpm and the other is an nginx webserver exposed to the Internet through an Ingress.
The problem is that I can't connect to the app service from the webserver pod (same issue when trying to connect to other services).
ping result:
$ ping -c4 app.ternobo-connect
PING app.ternobo-connect (10.245.240.225): 56 data bytes
--- app.ternobo-connect ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
but the pods are individually available via their IPs.
app-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    ternobo.kubernates.service: app
    ternobo.kubernates.network/app-network: "true"
  name: app
  namespace: ternobo-connect
spec:
  replicas: 4
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 50%
  selector:
    matchLabels:
      ternobo.kubernates.service: app
  template:
    metadata:
      labels:
        ternobo.kubernates.network/app-network: "true"
        ternobo.kubernates.service: app
    spec:
      containers:
      - env:
        - name: SERVICE_NAME
          value: app
        - name: SERVICE_TAGS
          value: production
        image: ghcr.io/ternobo/ternobo-connect:0.1.01
        name: app
        ports:
        - containerPort: 9000
        resources: {}
        tty: true
        workingDir: /var/www
        envFrom:
        - configMapRef:
            name: appenvconfig
      imagePullSecrets:
      - name: regsecret
      restartPolicy: Always
status: {}
app-service.yaml:
apiVersion: v1
kind: Service
metadata:
  labels:
    ternobo.kubernates.network/app-network: "true"
  name: app
  namespace: ternobo-connect
spec:
  type: ClusterIP
  ports:
  - name: "9000"
    port: 9000
    targetPort: 9000
  selector:
    ternobo.kubernates.service: app
status:
  loadBalancer: {}
network-policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app-network
  namespace: ternobo-connect
spec:
  podSelector: {}
  ingress:
  - {}
  policyTypes:
  - Ingress
I also tried removing the network policy, but it didn't work. I also tried changing the podSelector rules to only select pods with the ternobo.kubernates.network/app-network: "true" label.
Kubernetes Service URLs are in the my-svc.my-namespace.svc.cluster-domain.example format, see: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#a-aaaa-records
So the ping should be:
ping -c4 app.ternobo-connect.svc.cluster.local
If the webserver is in the same namespace as the service, you can ping the service name directly:
ping -c4 app
I don't know the impact of the network policy; I haven't worked with it.

NSQ cluster in Kubernetes

I'm trying to set up an NSQ cluster in Kubernetes and having issues.
Basically, I want to scale out NSQ and NSQ Lookup. I have a StatefulSet (2 nodes) definition for both of them. To avoid posting the whole YAML file, I'll post only part of it for NSQ.
NSQ container template
command:
- /nsqd
- -data-path
- /data
- -lookupd-tcp-address
- nsqlookupd.default.svc.cluster.local:4160
Here nsqlookupd.default.svc.cluster.local is a K8s headless service; by doing this I'm expecting the NSQ instance to open a connection to all of the NSQ Lookup instances, which in fact is not happening. It just opens a connection to a random one. However, if I explicitly list all of the NSQ Lookup hosts like this, it works:
command:
- /nsqd
- -data-path
- /data
- -lookupd-tcp-address
- nsqlookupd-0.nsqlookupd:4160
- -lookupd-tcp-address
- nsqlookupd-1.nsqlookupd:4160
I also wanted to use the headless service DNS name in --broadcast-address for both NSQ and NSQ Lookup, but that doesn't work either.
I'm using the nsqio Go library for publishing and consuming messages, and it looks like I can't use the headless service there either and have to explicitly list NSQ/NSQ Lookup pod names when initializing a consumer or publisher.
Am I using this in the wrong way?
I mean I want to have horizontally scaled NSQ and NSQLookup instances and not hardcode the addresses.
You could use a StatefulSet and a headless Service to achieve this goal.
Try using the below config YAML to deploy it into the K8s cluster, or else check out the official Helm chart: https://github.com/nsqio/helm-chart
Using a headless Service you can discover all pod IPs.
apiVersion: v1
kind: Service
metadata:
  name: nsqlookupd
  labels:
    app: nsq
spec:
  ports:
  - port: 4160
    targetPort: 4160
    name: tcp
  - port: 4161
    targetPort: 4161
    name: http
  publishNotReadyAddresses: true
  clusterIP: None
  selector:
    app: nsq
    component: nsqlookupd
---
apiVersion: v1
kind: Service
metadata:
  name: nsqd
  labels:
    app: nsq
spec:
  ports:
  - port: 4150
    targetPort: 4150
    name: tcp
  - port: 4151
    targetPort: 4151
    name: http
  clusterIP: None
  selector:
    app: nsq
    component: nsqd
---
apiVersion: v1
kind: Service
metadata:
  name: nsqadmin
  labels:
    app: nsq
spec:
  ports:
  - port: 4170
    targetPort: 4170
    name: tcp
  - port: 4171
    targetPort: 4171
    name: http
  selector:
    app: nsq
    component: nsqadmin
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: nsqlookupd
spec:
  serviceName: "nsqlookupd"
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nsq
        component: nsqlookupd
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nsq
              - key: component
                operator: In
                values:
                - nsqlookupd
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: nsqlookupd
        image: nsqio/nsq:v1.1.0
        imagePullPolicy: Always
        resources:
          requests:
            cpu: 30m
            memory: 64Mi
        ports:
        - containerPort: 4160
          name: tcp
        - containerPort: 4161
          name: http
        livenessProbe:
          httpGet:
            path: /ping
            port: http
          initialDelaySeconds: 5
        readinessProbe:
          httpGet:
            path: /ping
            port: http
          initialDelaySeconds: 2
        command:
        - /nsqlookupd
      terminationGracePeriodSeconds: 5
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: nsqd
spec:
  serviceName: "nsqd"
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nsq
        component: nsqd
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nsq
              - key: component
                operator: In
                values:
                - nsqd
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: nsqd
        image: nsqio/nsq:v1.1.0
        imagePullPolicy: Always
        resources:
          requests:
            cpu: 30m
            memory: 64Mi
        ports:
        - containerPort: 4150
          name: tcp
        - containerPort: 4151
          name: http
        livenessProbe:
          httpGet:
            path: /ping
            port: http
          initialDelaySeconds: 5
        readinessProbe:
          httpGet:
            path: /ping
            port: http
          initialDelaySeconds: 2
        volumeMounts:
        - name: datadir
          mountPath: /data
        command:
        - /nsqd
        - -data-path
        - /data
        - -lookupd-tcp-address
        - nsqlookupd-0.nsqlookupd:4160
        - -lookupd-tcp-address
        - nsqlookupd-1.nsqlookupd:4160
        - -lookupd-tcp-address
        - nsqlookupd-2.nsqlookupd:4160
        - -broadcast-address
        - $(HOSTNAME).nsqd
        env:
        - name: HOSTNAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
      terminationGracePeriodSeconds: 5
      volumes:
      - name: datadir
        persistentVolumeClaim:
          claimName: datadir
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes:
      - "ReadWriteOnce"
      storageClassName: ssd
      resources:
        requests:
          storage: 1Gi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nsqadmin
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nsq
        component: nsqadmin
    spec:
      containers:
      - name: nsqadmin
        image: nsqio/nsq:v1.1.0
        imagePullPolicy: Always
        resources:
          requests:
            cpu: 30m
            memory: 64Mi
        ports:
        - containerPort: 4170
          name: tcp
        - containerPort: 4171
          name: http
        livenessProbe:
          httpGet:
            path: /ping
            port: http
          initialDelaySeconds: 10
        readinessProbe:
          httpGet:
            path: /ping
            port: http
          initialDelaySeconds: 5
        command:
        - /nsqadmin
        - -lookupd-http-address
        - nsqlookupd-0.nsqlookupd:4161
        - -lookupd-http-address
        - nsqlookupd-1.nsqlookupd:4161
        - -lookupd-http-address
        - nsqlookupd-2.nsqlookupd:4161
      terminationGracePeriodSeconds: 5
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nsq
spec:
  rules:
  - host: nsq.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nsqadmin
          servicePort: 4171

Cannot connect external Oracle database from inside Kubernetes

I'm deploying an app on K8s, but I cannot connect to the Oracle database on an external machine.
I tried to connect directly via the DB IP and also via the Service endpoint, but neither works.
Please help me solve this.
Here is the database info:
Database Ip: 192.168.1.25
Port: 1521
Here is service.yaml
apiVersion: v1
kind: Service
metadata:
  name: mydb
spec:
  ports:
  - port: 1521
    targetPort: 1521
    protocol: TCP
---
kind: Endpoints
apiVersion: v1
metadata:
  name: mydb
subsets:
- addresses:
  - ip: 192.168.1.25
  ports:
  - port: 1521
And the connection string is:
"User ID=test;Password=pwd;Data Source=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=mydb)(PORT=1521)))(CONNECT_DATA=(SID=ORCLCDB)));";
Here is app_deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: apptest
spec:
  selector:
    matchLabels:
      run: web
  replicas: 1
  template:
    metadata:
      labels:
        run: web
    spec:
      containers:
      - name: app-web
        image: app-web
        imagePullPolicy: IfNotPresent
        env:
        - name: "ASPNETCORE_ENVIRONMENT"
          value: "Staging"
        volumeMounts:
        - name: app-web-log
          mountPath: /app/Log
      volumes:
      - name: app-web-log
        hostPath:
          path: /log
          type: DirectoryOrCreate
---
apiVersion: v1
kind: Service
metadata:
  name: web-svc
  namespace: apptest
  labels:
    run: web
spec:
  type: NodePort
  ports:
  - name: web
    protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30200
  selector:
    run: web
The output when I run kubectl logs core-dns-66bff467f8-htn5w -n kube-system is:
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b
[ERROR] plugin/errors: 2 4425317009050045698.2183862687326411378. HINFO: read udp 10.244.0.8:37732->8.8.8.8:53: read: no route to host
[ERROR] plugin/errors: 2 4425317009050045698.2183862687326411378. HINFO: read udp 10.244.0.8:41292->8.8.8.8:53: read: no route to host
[ERROR] plugin/errors: 2 4425317009050045698.2183862687326411378. HINFO: read udp 10.244.0.8:36947->8.8.8.8:53: read: no route to host

How to rewrite in Istio the right way and avoid 404 error?

Scenario:
I have 2 deployments: deployment-1 with label version: v1 and deployment-2 with label version: v2. Both are exposed by a NodePort Service, test-1. I have created a VirtualService with two match conditions as follows:
- match:
  - uri:
      exact: /v1
  rewrite:
    uri: /
  route:
  - destination:
      host: test-1
      port:
        number: 80
      subset: v1
- match:
  - uri:
      exact: /v2
  rewrite:
    uri: /
  route:
  - destination:
      host: test-1
      port:
        number: 80
      subset: v2
The code file can be found here.
Problem:
When I try to visit the Ingress Gateway IP at http://ingress-gateway-ip/v1, I encounter a 404 error in the console saying http://ingress-gateway-ip/favicon.ico was not found (because the URI has been rewritten to "/"); the stylings and JS are also absent at this route. But when I visit http://ingress-gateway-ip/v1/favicon.ico directly, I can see the favicon icon along with all the JS and stylings.
Please find the screenshots of the problem here.
Expectation:
How can I access these two services using prefix routing in the URL, meaning when I navigate to /v1, only the V1 version should come up without a 404, and when I navigate to /v2, only the V2 version should come up?
EDIT-1:
Added code snippet from the original code
Added code file link
EDIT-2:
Added screenshot of the problem
Modified problem statement for clear understanding
How can I access these two services using a prefix routing in the url, meaning when I navigate to /v1, only the V1 version should come up without 404, and when I navigate to /v2, only the V2 version should come up
I assume your issue is in your DestinationRule: in the v2 subset the label is version: v1 when it should be version: v2, which is why requests to both /v1 and /v2 went only to the v1 version of your pod.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: test-destinationrule
spec:
  host: test-1
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v1 <---
It should be
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: test-destinationrule
spec:
  host: test-1
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
When I try to visit this Ingress Gateway IP, I encounter a 404 error in the console saying http://ingress-gateway-ip/favicon.ico
It's working as designed: you haven't specified a route for /, just for /v1 and /v2.
If you want to be able to access / you would have to add another match for it:
- match:
  - uri:
      prefix: /
  route:
  - destination:
      host: test-1
Here is a working example with 2 nginx pods, take a look.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-v1
spec:
  selector:
    matchLabels:
      version: v1
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
        version: v1
    spec:
      containers:
      - name: nginx1
        image: nginx
        ports:
        - containerPort: 80
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "echo Hello nginx1 > /usr/share/nginx/html/index.html"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-v2
spec:
  selector:
    matchLabels:
      version: v2
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
        version: v2
    spec:
      containers:
      - name: nginx2
        image: nginx
        ports:
        - containerPort: 80
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "echo Hello nginx2 > /usr/share/nginx/html/index.html"]
---
apiVersion: v1
kind: Service
metadata:
  name: test-1
  labels:
    app: frontend
spec:
  ports:
  - name: http-front
    port: 80
    protocol: TCP
  selector:
    app: frontend
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: simpleexample
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - '*'
    port:
      name: http
      number: 80
      protocol: HTTP
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: test-virtualservice
spec:
  gateways:
  - simpleexample
  hosts:
  - '*'
  http:
  - match:
    - uri:
        prefix: /v1
    rewrite:
      uri: /
    route:
    - destination:
        host: test-1
        port:
          number: 80
        subset: v1
  - match:
    - uri:
        prefix: /v2
    rewrite:
      uri: /
    route:
    - destination:
        host: test-1
        port:
          number: 80
        subset: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: test-destinationrule
spec:
  host: test-1
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
Results from curl:
curl -v ingress-gateway-ip/
404 Not Found
there is no path specified for that in the virtual service
curl -v ingress-gateway-ip/v1
HTTP/1.1 200 OK
Hello nginx1
curl -v ingress-gateway-ip/v2
HTTP/1.1 200 OK
Hello nginx2
EDIT
the problem is that all the stylings and js are not readable by the browser at "/" when they are being re-written
It was already explained by #Rinor here
I would add this Istio in Practice tutorial here; it explains well a way of dealing with that problem, which is to add more paths for your dependencies (js, css, etc.).
Let's break down the requests that should be routed to Frontend:
Exact path / should be routed to Frontend to get the Index.html.
Prefix path /static/* should be routed to Frontend to get any static files needed by the frontend, like Cascading Style Sheets and JavaScript files.
Paths matching the regex ^.*\.(ico|png|jpg)$ should be routed to Frontend as they are images that the page needs to show.
http:
- match:
  - uri:
      exact: /
  - uri:
      exact: /callback
  - uri:
      prefix: /static
  - uri:
      regex: '^.*\.(ico|png|jpg)$'
  route:
  - destination:
      host: frontend
      port:
        number: 80
Let me know if you have any more questions.

How to access StatefulSet with other PODs

I have a running Elasticsearch STS with a headless Service assigned to it:
svc.yaml:
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  namespace: elasticsearch-namespace
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
  - port: 9200
    name: rest
  - port: 9300
    name: inter-node
stateful.yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: elasticsearch-namespace
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 9200
          name: rest
          protocol: TCP
        - containerPort: 9300
          name: inter-node
          protocol: TCP
        volumeMounts:
        - name: elasticsearch-persistent-storage
          mountPath: /usr/share/elasticsearch/data
        env:
        - name: cluster.name
          value: k8s-logs
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: discovery.seed_hosts
          value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
        - name: cluster.initial_master_nodes
          value: "es-cluster-0,es-cluster-1,es-cluster-2"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx512m"
      initContainers:
      - name: fix-permissions
        image: busybox
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: elasticsearch-persistent-storage
          mountPath: /usr/share/elasticsearch/data
      - name: increase-vm-max-map
        image: busybox
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: increase-fd-ulimit
        image: busybox
        command: ["sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-persistent-storage
      labels:
        app: elasticsearch
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: elasticsearch-storageclass
      resources:
        requests:
          storage: 20Gi
The question is: how do I access this StatefulSet from pods of a Deployment? Let's say, using this Redis pod:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-ws-app
  labels:
    app: redis-ws-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis-ws-app
  template:
    metadata:
      labels:
        app: redis-ws-app
    spec:
      containers:
      - name: redis-ws-app
        image: redis:latest
        command: [ "redis-server"]
        ports:
        - containerPort: 6379
I have been trying to create another Service that would enable me to access it from outside, but without any luck:
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch-tcp
  namespace: elasticsearch-namespace
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  ports:
  - protocol: TCP
    port: 9200
    targetPort: 9200
You would reach it directly by hitting the headless service. As an example, take this StatefulSet and this Service:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 4
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
---
kind: Service
apiVersion: v1
metadata:
  name: nginx-headless
spec:
  selector:
    app: nginx
  clusterIP: None
  ports:
  - port: 80
    name: http
I could reach the pods of the StatefulSet through the headless service from any pod within the cluster:
/ # curl -I nginx-headless
HTTP/1.1 200 OK
Server: nginx/1.19.0
Date: Tue, 09 Jun 2020 12:36:47 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 26 May 2020 15:00:20 GMT
Connection: keep-alive
ETag: "5ecd2f04-264"
Accept-Ranges: bytes
The peculiarity of a headless service is that it doesn't create iptables rules for that service. So, when you query that service, the request goes to kube-dns (or CoreDNS), which returns the backends rather than the IP address of the service itself. So, if you do an nslookup, for example, it will return all the backends (pods) of that service:
/ # nslookup nginx-headless
Name: nginx-headless
Address 1: 10.56.1.44
Address 2: 10.56.1.45
Address 3: 10.56.1.46
Address 4: 10.56.1.47
And it won't have any iptable rules assigned to it:
$ sudo iptables-save | grep -i nginx-headless
$
Unlike a normal service, which would return the IP address of the service itself:
/ # nslookup nginx
Name: nginx
Address 1: 10.60.15.30 nginx.default.svc.cluster.local
And it will have iptable rules assigned to it:
$ sudo iptables-save | grep -i nginx
-A KUBE-SERVICES ! -s 10.56.0.0/14 -d 10.60.15.30/32 -p tcp -m comment --comment "default/nginx: cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.60.15.30/32 -p tcp -m comment --comment "default/nginx: cluster IP" -m tcp --dport 80 -j KUBE-SVC-4N57TFCL4MD7ZTDA
User #suren was right about the headless service. In my case, I was just using a wrong reference.
The Kube-DNS naming convention is
service.namespace.svc.cluster-domain.tld
and the default cluster domain is cluster.local
In my case, in order to reach the pods, one has to use:
curl -I elasticsearch.elasticsearch-namespace
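Note that, with the default cluster.local domain, the fully qualified form is elasticsearch.elasticsearch-namespace.svc.cluster.local, and each StatefulSet pod is also individually resolvable through the headless Service as es-cluster-0.elasticsearch.elasticsearch-namespace.svc.cluster.local (and so on for es-cluster-1 and es-cluster-2), which is the same pattern the discovery.seed_hosts setting above already relies on.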
