I'm using ECK 1.5.0 and I have to use Ingress to expose Elasticsearch, but I'm getting a 502 Bad Gateway error when I go to the URL (http://my-db-url.com). I have confirmed the database is running fine and is able to collect and display data.
I was only able to find solutions on the web for exposing Kibana with Ingress, but those were not working for me.
Here's my elasticsearch.yaml (it contains the Elasticsearch object and the Ingress object):
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: my-db
spec:
version: 7.12.0
volumeClaimDeletePolicy: DeleteOnScaledownOnly
nodeSets:
- name: default
count: 3
config:
node.store.allow_mmap: false
volumeClaimTemplates:
- metadata:
name: elasticsearch-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
storageClassName: longhorn
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-db-ingress
namespace: my-namespace
annotations:
nginx.ingress.kubernetes.io/proxy-connect-timeout: "300"
nginx.ingress.kubernetes.io/upstream-vhost: "$http_host"
spec:
rules:
- host: my-db-url.com
http:
paths:
- backend:
serviceName: my-db-es-http
servicePort: 9200
It turns out the same question was asked here: https://discuss.elastic.co/t/received-plaintext-http-traffic-on-an-https-channel-closing-connection/271380
and the solution is to force HTTPS using annotations, which for the nginx ingress controller can be found here: https://github.com/elastic/helm-charts/issues/779#issuecomment-781431675
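For reference, this is roughly what the fix looks like on the Ingress above (a sketch assuming the nginx ingress controller; ECK serves the Elasticsearch HTTP layer over self-signed TLS by default, so nginx must speak HTTPS to the backend):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-db-ingress
  namespace: my-namespace
  annotations:
    # ECK serves HTTPS by default, so tell nginx to use HTTPS towards the backend
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "300"
spec:
  rules:
  - host: my-db-url.com
    http:
      paths:
      - backend:
          serviceName: my-db-es-http
          servicePort: 9200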
I have been trying to create a StatefulSet of a service registry (Eureka server) in a Spring Boot application. The reason I am doing this is that I want to attach a pre-defined name to the service-registry pod, so that it is able to communicate with all the Eureka clients even after it restarts. Even though I have been able to create the services (headless and NodePort) with this configuration, it doesn't create the pod/deployment or the PersistentVolumeClaim itself. Please check the deployment YAML below and suggest changes.
# Define a 'Persistent Volume Claim'(PVC) for Storage, dynamically provisioned by cluster
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
namespace: rtb
name: service-registry-pv-claim # name of PVC essential for identifying the storage data
labels:
app: eureka
spec:
accessModes:
- ReadWriteOnce #This specifies the mode of the claim that we are trying to create.
resources:
requests:
storage: 1Gi #This will tell kubernetes about the amount of space we are trying to claim.
---
apiVersion: v1
kind: ConfigMap
metadata:
namespace: rtb
name: eureka-cm
data:
eureka_service_address: http://eureka-0.eureka:8761/eureka
---
apiVersion: v1
kind: Service
metadata:
namespace: rtb
name: eureka
labels:
app: eureka
spec:
clusterIP: None
ports:
- port: 8761
name: eureka
selector:
app: eureka
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
namespace: rtb
name: eureka
spec:
serviceName: "eureka"
replicas: 1
selector:
matchLabels:
app: eureka
template:
metadata:
labels:
app: eureka
spec:
containers:
- name: eureka
image: my-image
imagePullPolicy: Always
ports:
- containerPort: 8761
env:
- name: EUREKA_SERVER_ADDRESS
valueFrom:
configMapKeyRef:
name: eureka-cm
key: eureka_service_address
volumeMounts: # Mounting volume obtained from Persistent Volume Claim
- name: service-registry-persistent-storage
mountPath: /var/lib/eureka #This is the path in the container on which the mounting will take place.
volumes:
- name: service-registry-persistent-storage # Obtaining 'volume' from PVC
persistentVolumeClaim:
claimName: service-registry-pv-claim
---
apiVersion: v1
kind: Service
metadata:
namespace: rtb
name: eureka-lb
labels:
app: eureka
spec:
selector:
app: eureka
type: NodePort
ports:
- port: 80
targetPort: 8761
And below is the application.yml file:
server:
port: 8761
eureka:
instance:
hostname: "${HOSTNAME}.eureka"
client:
register-with-eureka: false
fetch-registry: false
serviceUrl:
defaultZone: ${EUREKA_SERVER_ADDRESS}
This is how the Eureka client apps refer to the Eureka server:
eureka:
instance:
preferIpAddress: true
hostname: eureka-0
I am new to Kubernetes, so please suggest the changes.
Configuration after adding the PersistentVolume
apiVersion: v1
kind: PersistentVolume
metadata:
namespace: rtb
name: my-pv
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
  hostPath:
    path: /run/desktop/mnt/host/c/Users/User/Documents/kubernetesbkp
---
# Define a 'Persistent Volume Claim'(PVC) for Storage, dynamically provisioned by cluster
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
namespace: rtb
name: service-registry-pv-claim # name of PVC essential for identifying the storage data
labels:
app: eureka
spec:
accessModes:
- ReadWriteOnce #This specifies the mode of the claim that we are trying to create.
resources:
requests:
storage: 1Gi #This will tell kubernetes about the amount of space we are trying to claim.
---
apiVersion: v1
kind: ConfigMap
metadata:
namespace: rtb
name: eureka-cm
data:
eureka_service_address: http://eureka-0.eureka:8761/eureka
---
apiVersion: v1
kind: Service
metadata:
namespace: rtb
name: eureka
labels:
app: eureka
spec:
clusterIP: None
ports:
- port: 8761
name: eureka
selector:
app: eureka
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
namespace: rtb
name: eureka
spec:
serviceName: "eureka"
replicas: 1
selector:
matchLabels:
app: eureka
template:
metadata:
labels:
app: eureka
spec:
containers:
- name: eureka
image: my-image
imagePullPolicy: Always
ports:
- containerPort: 8761
env:
- name: EUREKA_SERVER_ADDRESS
valueFrom:
configMapKeyRef:
name: eureka-cm
key: eureka_service_address
volumeMounts: # Mounting volume obtained from Persistent Volume Claim
- name: service-registry-persistent-storage
mountPath: /var/lib/eureka #This is the path in the container on which the mounting will take place.
volumes:
- name: service-registry-persistent-storage # Obtaining 'volume' from PVC
persistentVolumeClaim:
claimName: service-registry-pv-claim
---
apiVersion: v1
kind: Service
metadata:
namespace: rtb
name: eureka-lb
labels:
app: eureka
spec:
selector:
app: eureka
type: NodePort
ports:
- port: 80
targetPort: 8761
If you are running Kubernetes locally, then you first have to create a PersistentVolume; only then can the PersistentVolumeClaim retrieve storage from the PV you created. Otherwise your PVC will stay in a Pending state, because without a PV the PersistentVolumeClaim does not know where to pick up the volume from.
So try creating the PersistentVolume like this:
PersistentVolume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: my-pv
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
  hostPath:
    path: /tmp/ # local path where you want to allocate your PV
Now you can create the PVC and StatefulSet as per your requirement.
NOTE: Make sure the PV storage is always greater than or equal to the claim.
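Once applied, you can verify that the claim actually bound (a quick check, assuming the rtb namespace used in the question):
kubectl get pv,pvc -n rtb
The PVC should report a STATUS of Bound; if it stays Pending, the PV's capacity or access modes do not satisfy the claim.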
If you are using Docker Desktop's Kubernetes, the hostPath will be different from the one mentioned above; refer to this SO answer for it.
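For example (this is the path already used in the question above, where Docker Desktop mounts the Windows C: drive into WSL):
hostPath:
  path: /run/desktop/mnt/host/c/Users/User/Documents/kubernetesbkp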
For more detailed information, refer to these links: link1 link2
This is the deployment configuration that worked for me, with one caveat: I applied each of the configurations one by one; it does not work with a single apply, and the StatefulSet isn't created in that case. I am presenting it as a single configuration file here for convenience.
It would be helpful if someone could point out why it doesn't work with a single apply command. Thanks!
apiVersion: v1
kind: ConfigMap
metadata:
name: eureka-cm
data:
eureka_service_address: http://eureka-0.eureka:8761/eureka
---
apiVersion: v1
kind: Service
metadata:
name: eureka
labels:
app: eureka
spec:
clusterIP: None
ports:
- port: 8761
name: eureka
selector:
app: eureka
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: eureka
spec:
serviceName: "eureka"
replicas: 1
selector:
matchLabels:
app: eureka
template:
metadata:
labels:
app: eureka
spec:
containers:
- name: eureka
image: my-image
imagePullPolicy: Always
ports:
- containerPort: 8761
env:
- name: EUREKA_SERVER_ADDRESS
valueFrom:
configMapKeyRef:
name: eureka-cm
key: eureka_service_address
---
apiVersion: v1
kind: Service
metadata:
name: eureka-lb
labels:
app: eureka
spec:
selector:
app: eureka
type: NodePort
ports:
- port: 80
targetPort: 8761
If possible, I have a question.
This is my Kibana:
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
name: quickstart
spec:
version: 7.6.2
count: 1
elasticsearchRef:
name: cdbridgerpayelasticsearch
http:
service:
spec:
type: LoadBalancer
The Kibana ran well (the LoadBalancer too), and this is my Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
cert-manager.io/cluster-issuer: "letsencrypt-http"
name: bcd-ingress-kibana-bcd
spec:
rules:
- host: kibana.some.net
http:
paths:
- backend:
serviceName: quickstart
servicePort: 5601
path: /
tls:
- hosts:
- kibana.some.net
When I ran kubectl get ingress, I got:
$ kubectl get ingress
W0614 15:48:48.425600 1675 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead.
To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
NAME CLASS HOSTS ADDRESS PORTS AGE
kibana-ingress-bcd kibana.some.net 80 5m24s
And when I tried browsing with this host, the browser didn't recognize the hostname.
If anyone knows what the problem is, it would help me a lot.
Thanks,
Frida
I have deployed the Elastic APM server into Kubernetes and was trying to expose it through the nginx ingress controller. The following is my configuration:
---
apiVersion: v1
kind: ConfigMap
metadata:
namespace: elastic
name: apm-server-config
labels:
k8s-app: apm-server
data:
apm-server.yml: |-
apm-server:
host: "0.0.0.0:8200"
setup.kibana:
enabled: "true"
host: "kibana:5601"
output.elasticsearch:
hosts: ["elastic:9200"]
---
#Deployment Configuration
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
name: apm-server
env: msprod
state: common
name: apm-server
namespace: elastic
spec:
replicas: 1
minReadySeconds: 10
selector:
matchLabels:
app: apm-server
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
labels:
app: apm-server
spec:
containers:
- image: docker.elastic.co/apm/apm-server:7.12.1
imagePullPolicy: Always
env:
- name: output.elasticsearch.hosts
value: "http://elastic:9200"
name: apm-server
ports:
- name: liveness-port
containerPort: 8200
volumeMounts:
- name: apm-server-config
mountPath: /usr/share/apm-server/apm-server.yml
readOnly: true
subPath: apm-server.yml
resources:
limits:
cpu: 250m
memory: 1024Mi
requests:
cpu: 100m
memory: 250Mi
volumes:
- name: apm-server-config
configMap:
name: apm-server-config
nodeSelector:
env: prod
restartPolicy: Always
terminationGracePeriodSeconds: 30
---
#Service Configuration
apiVersion: v1
kind: Service
metadata:
labels:
app: apm-server
name: apm-server
namespace: elastic
spec:
ports:
- port: 8200
targetPort: 8200
name: http
nodePort: 31000
selector:
app: apm-server
sessionAffinity: None
type: NodePort
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
namespace: elastic
name: gateway-ingress-apm
annotations:
kubernetes.io/ingress.class: nginx
spec:
rules:
- host: my.domain.com
http:
paths:
- path: /apm
backend:
serviceName: apm-server
servicePort: 8200
The pod is running and I am able to hit the APM server using kubectl port-forward.
But when I access the APM server at https://my.domain.com/apm, I get a "page not found" error in the browser and the following error in the APM pod:
{"log.level":"error","#timestamp":"2021-10-21T06:22:00.198Z","log.logger":"request","log.origin":{"file.name":"middleware/log_middleware.go","file.line":60},"message":"404 page not found","url.original":"/apm","http.request.method":"GET","user_agent.original":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36","source.address":"10.148.7.7","http.request.body.bytes":0,"http.request.id":"9294124a-5356-4b2c-ba8e-c0a589b23571","event.duration":110881,"http.response.status_code":404,"error.message":"404 page not found","ecs.version":"1.6.0"}
The error occurs because there is no context path configured in APM. I have gone through the APM documentation and couldn't find a way to configure a context path in the APM server. Please help.
Posting this as an answer out of the comments.
The initial ingress rule passes the same path, /apm, to the APM service, which is confirmed by the error in the APM pod's logs: "message":"404 page not found","url.original":"/apm"
To fix it, the nginx ingress has a rewrite annotation. The way it works is described in the link, with an example.
The final ingress.yaml should look like this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
namespace: elastic
name: gateway-ingress-apm
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$2 # adding captured group
spec:
rules:
- host: my.domain.com
http:
paths:
- path: /apm(/|$)(.*) # to have captured group works correctly
backend:
serviceName: apm-server
servicePort: 8200
What happens here is that requests sent to my.domain.com/apm go to the service on the / path.
The captured group preserves correct sub-paths: for instance, if a request goes to my.domain.com/apm/something, the ingress will translate it to /something, which is then passed to the service.
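To illustrate the resulting mapping concretely (the intake path below is just an example endpoint, assuming my.domain.com resolves to the ingress controller):
curl https://my.domain.com/apm                    # arrives at apm-server as /
curl https://my.domain.com/apm/intake/v2/events   # arrives at apm-server as /intake/v2/events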
I am looking to deploy a SaaS platform I have built on Kubernetes, but I have hit a barrier when it comes to setting up the deployment correctly. You can find the setup I am going for
here. I have tried setting up the ingress controller, and it is in fact forwarding requests to the GraphQL gateway; the problem is that the GraphQL gateway itself is incapable of connecting to the other services due to gRPC connection errors. Below you will find my Deployment.yml, which contains the GraphQL gateway, an ingress controller, and one service called general.
What I am asking, basically, is how I can connect my gRPC-based services to the main GraphQL gateway (see the sketch after the manifests below).
apiVersion: v1
kind: Service
metadata:
name: graphql-gateway
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 8080
selector:
app: gateway
---
apiVersion: v1
kind: Service
metadata:
name: general-service
spec:
ports:
- protocol: TCP
port: 443
targetPort: 600
selector:
app: general
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: gateway
spec:
replicas: 2
selector:
matchLabels:
app: gateway
template:
metadata:
labels:
app: gateway
spec:
containers:
- name: gateway
image: myregistry/gateway:latest
imagePullPolicy: "Always"
ports:
- containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: general
spec:
replicas: 2
selector:
matchLabels:
app: general
template:
metadata:
labels:
app: general
spec:
containers:
- name: general
image: myregistry/general:latest
imagePullPolicy: "Always"
ports:
- containerPort: 600
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: gateway-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/grpc-backend: "true"
spec:
rules:
- host: "playground.mydomain.com"
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: graphql-gateway
port:
number: 80
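One thing the manifests suggest: inside the cluster, the gateway should dial the general service through its Service DNS name rather than through the ingress. A minimal sketch of wiring that address into the gateway Deployment (GENERAL_SERVICE_ADDR is a hypothetical variable name that the gateway code would read when creating its gRPC channel):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gateway
spec:
  replicas: 2
  selector:
    matchLabels:
      app: gateway
  template:
    metadata:
      labels:
        app: gateway
    spec:
      containers:
      - name: gateway
        image: myregistry/gateway:latest
        imagePullPolicy: "Always"
        ports:
        - containerPort: 8080
        env:
        # hypothetical variable: "general-service" resolves via cluster DNS,
        # and the Service maps port 443 to container port 600
        - name: GENERAL_SERVICE_ADDR
          value: "general-service:443"
Since nothing in the general Deployment terminates TLS, the gateway would likely need a plaintext (insecure) gRPC channel even though the Service exposes port 443.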
I installed NGINX Ingress in my Kubernetes cluster. When I try to access the microservice endpoints via the Ingress controller, it does not work as expected.
I have deployed two Spring Boot applications.
Ingress Rules
Path 1 -> /customer
Path 2 -> /prac
When I try to access one of the services, e.g. http://test.practice.com/prac/practice/getprac, it does not work,
but when I try to access it without the Ingress path, http://test.practice.com/practice/getprac, it works.
I am not able to understand why it is not working with the Ingress path; the same happens for the other service.
Micro service 1 (Port 9090)
apiVersion: apps/v1
kind: Deployment
metadata:
name: customer
namespace: practice
labels:
app: customer
spec:
replicas: 5
selector:
matchLabels:
app: customer
template:
metadata:
labels:
app: customer
spec:
imagePullSecrets:
- name: testkuldeepsecret
containers:
- name: customer
image: kuldeep99/customer:v1
ports:
- containerPort: 9090
hostPort: 9090
---
apiVersion: v1
kind: Service
metadata:
name: customer-service
namespace: practice
labels:
spec:
ports:
- port: 9090
targetPort: 9090
protocol: TCP
name: http
selector:
app: customer
Micro service 2 (port 8000)
apiVersion: apps/v1
kind: Deployment
metadata:
name: prac
namespace: practice
labels:
app: prac
spec:
replicas: 4
selector:
matchLabels:
app: prac
template:
metadata:
labels:
app: prac
spec:
imagePullSecrets:
- name: testkuldeepsecret
containers:
- name: prac
image: kuldeep99/practice:v1
ports:
- containerPort: 8000
hostPort: 8000
---
apiVersion: v1
kind: Service
metadata:
name: prac-service
namespace: practice
labels:
spec:
ports:
- port: 8000
targetPort: 8000
protocol: TCP
name: http
selector:
app: prac
Service (customer-service and prac-service)
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
customer-service ClusterIP 10.97.203.19 <none> 9090/TCP 39m
ngtest ClusterIP 10.98.74.149 <none> 80/TCP 21h
prac-service ClusterIP 10.96.164.210 <none> 8000/TCP 15m
some-mysql ClusterIP None <none> 3306/TCP 2d16h
Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: practice-ingress
namespace: practice
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: practice.example.com
http:
paths:
- backend:
serviceName: customer-service
servicePort: 9090
path: /customer
- backend:
serviceName: prac-service
servicePort: 8000
path: /prac
You have installed this nginx ingress.
For the nginx.ingress.kubernetes.io/rewrite-target: / annotation to work properly, you need to install this nginx ingress instead.
An alternative way to solve the issue is to configure the context path to /prac in the Spring application.
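A minimal sketch of that alternative, assuming a Spring Boot 2.x application (the property goes in application.yml):
server:
  servlet:
    context-path: /prac
With the context path set in the application itself, the ingress can forward the /prac path unchanged instead of rewriting it.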
On top of the discussion, I observed one thing. We should not confuse
apiVersion: networking.k8s.io/v1
kind: Ingress
with
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
First, ensure which Ingress controller you are using and decide the apiVersion based on that. I'm using "ingress-nginx" (not "nginx-ingress"). This one supports apiVersion: networking.k8s.io/v1beta1 and works like a charm, as per "Arsene"'s comment.
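A quick way to check which Ingress API versions your cluster actually serves, independent of the controller:
kubectl api-versions | grep networking.k8s.io
Clusters on Kubernetes v1.19 through v1.21 typically serve both networking.k8s.io/v1 and networking.k8s.io/v1beta1; from v1.22 onwards, only v1 is available.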
This Ingress YAML file WORKS with the "ingress-nginx" Ingress controller:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$2
name: k8-exercise-03-two-app-ingress
spec:
rules:
- host: ex03.k8.sb.two.app.ingress.com
http:
paths:
- backend:
serviceName: k8-excercise-01-app-service
servicePort: 8080
path: /one(/|$)(.*)
- backend:
serviceName: k8-exercise-03-ms-service
servicePort: 8081
path: /two(/|$)(.*)
But this Ingress YAML file does NOT WORK with the "ingress-nginx" Ingress controller:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: k8-exercise-03-two-app-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
# nginx.ingress.kubernetes.io/use-regex: "true"
ingress.kubernetes.io/rewrite-target: /$2
spec:
# ingressClassName: nginx
rules:
#192.168.1.5 ex03.k8.sb.com is mapped in host file. 192.168.1.5 is Host machine IP
- host: ex03.k8.sb.two.app.ingress.com
http:
paths:
- backend:
service:
name: k8-excercise-01-app-service
port:
number: 8080
path: /one(/|$)(.*)
pathType: Prefix
- pathType: Prefix
path: /two(/|$)(.*)
backend:
service:
name: k8-exercise-03-ms-service
port:
number: 8081
I can access the Spring Boot API calls like this:
For App-1:
http://ex03.k8.sb.two.app.ingress.com/one/
Result: App One - Root
http://ex03.k8.sb.two.app.ingress.com/one/one
Result: App One - One API
http://ex03.k8.sb.two.app.ingress.com/one/api/v1/hello
Result: App One - Hello API
For App-2:
http://ex03.k8.sb.two.app.ingress.com/two/message/James%20Bond
Result: App Two- Hi James Bond API
Finally, if anyone knows how to change the apiVersion: networking.k8s.io/v1 YAML to work with the "ingress-nginx" controller, it would be appreciated. Thank you, and sorry for the long content.
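One detail worth checking (an observation based on the manifests above, not a verified fix): the non-working v1 file uses ingress.kubernetes.io/rewrite-target, but ingress-nginx expects the annotation with the nginx. prefix, and regex paths in a v1 Ingress need pathType: ImplementationSpecific rather than Prefix. A sketch of one rule with both changes:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8-exercise-03-two-app-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /$2 # note the nginx. prefix
spec:
  rules:
  - host: ex03.k8.sb.two.app.ingress.com
    http:
      paths:
      - path: /one(/|$)(.*)
        pathType: ImplementationSpecific # regex paths require this pathType with ingress-nginx
        backend:
          service:
            name: k8-excercise-01-app-service
            port:
              number: 8080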
I spent literally a day on this problem. The problem was simply that the wrong nginx was installed. I used the Helm chart found here to install nginx-ingress.
To install it, please use Helm version 3:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx
Once it has run, you will see in the logs a snippet that illustrates how your ingress should look. If you want to do the above, you can add the annotation suggested above, and from there you can follow the tutorials here to achieve more, such as rewrites.
My cluster is deployed on GCP using GKE.
When done, this is the output log:
NAME: ingress-nginx
LAST DEPLOYED: Sat Apr 24 07:56:11 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace default get services -o wide -w ingress-nginx-controller'
An example Ingress that makes use of the controller:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
name: example
namespace: foo
spec:
rules:
- host: www.example.com
http:
paths:
- backend:
serviceName: exampleService
servicePort: 80
path: /
# This section is only required if TLS is to be enabled for the Ingress
tls:
- hosts:
- www.example.com
secretName: example-tls
If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:
apiVersion: v1
kind: Secret
metadata:
name: example-tls
namespace: foo
data:
tls.crt: <base64 encoded cert>
tls.key: <base64 encoded key>
type: kubernetes.io/tls
This is how it looks now, after installing it:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$2
name: example
# namespace: foo
spec:
rules:
- host: [your ip address].sslip.io
http:
paths:
- backend:
serviceName: registry-app-server
servicePort: 8761
path: /eureka/(.*)
- backend:
serviceName: api-gateway-server
servicePort: 7000
path: /api(/|$)(.*)
As you can see, I am deploying Spring microservices using Kubernetes (GKE).
There are a lot of benefits to using nginx-ingress over the built-in GKE ingress, and it is more popular than its counterparts.