How to get Kubernetes Ingress to terminate SSL and proxy to service?

I have a CentOS 7 deployment with Kubernetes on bare metal. Everything works great. However, I would like to get an Ingress working. In brief, what I want to do is terminate SSL within the Ingress and have plain HTTP between the Ingress and my service. This is what I did:
1) I hacked Weave to allow hostNetwork.
2) I set up an ingress controller as follows:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    k8s-app: nginx-ingress-lb
    kubernetes.io/cluster-service: "true"
spec:
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-lb
        name: nginx-ingress-lb
    spec:
      hostNetwork: true
      terminationGracePeriodSeconds: 60
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        role: edge-router
      containers:
      - image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.12.0
        name: nginx-ingress-lb
        imagePullPolicy: Always
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        # use downward API
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --enable-ssl-passthrough
        # - --default-ssl-certificate=$(POD_NAMESPACE)/tls-certificate
        volumeMounts:
        - name: tls-dhparam-vol
          mountPath: /etc/nginx-ssl/dhparam
      volumes:
      - name: tls-dhparam-vol
        secret:
          secretName: tls-dhparam
Note the DaemonSet and the nodeSelector, and also hostNetwork: true, so that my Kubernetes nodes open ports 80 and 443 to listen for routing.
So I attempt to go to http://foo.bar.com and, unsurprisingly, get nothing: just the default backend - 404 page. I need the Ingress rule...
3) So I create an Ingress rule like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hub
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.org/ssl-services: "hub"
spec:
  tls:
  - hosts:
    - foo.bar.com
    secretName: tls-dhparam
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: hub
          servicePort: 8000
And it works great... for HTTP. When I go to my node at http://foo.bar.com I can access my service (hub) and can log on. However, since one has to log on, it only makes sense to enforce HTTPS.
So my problem is that when I switch my browser over to https://foo.bar.com, I end up with the default backend - 404 page.
Looking at the certificate presented, I see that it is one created by Kubernetes:
Kubernetes Ingress Controller Fake Certificate
Self-signed root certificate
Checking my secrets:
$ kubectl -n ingress-nginx get secrets
NAME                                       TYPE                                  DATA      AGE
default-token-kkd2j                        kubernetes.io/service-account-token   3         12m
nginx-ingress-serviceaccount-token-7f2sq   kubernetes.io/service-account-token   3         12m
tls-dhparam                                Opaque                                1         8m
What am I doing wrong?

The issue was that using a PEM file didn't seem to work (and there was no noticeable error associated with it).
It worked when I switched over to a TLS cert/key secret via:
kubectl create secret tls tls-certificate --key my.key --cert my.cer
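For reference, the equivalent Secret can also be written declaratively; a minimal sketch, where the base64 values are placeholders to be filled in from my.cer and my.key (e.g. via base64 -w0 my.cer):
apiVersion: v1
kind: Secret
metadata:
  name: tls-certificate
type: kubernetes.io/tls
data:
  tls.crt: <base64 of my.cer>   # placeholder, not a real value
  tls.key: <base64 of my.key>   # placeholder, not a real value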

In your example, it looks like your Ingress doesn't explicitly declare metadata.namespace. If it ends up in the default namespace while the tls-dhparam Secret is in the ingress-nginx namespace, that would be the problem: the TLS secret for an Ingress must live in the same namespace as the Ingress itself, for example:
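A sketch based on the manifests above, pinning the Ingress to the same namespace as the secret (note that backend services resolve in the Ingress's namespace, so hub would need to live there too):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hub
  namespace: ingress-nginx   # same namespace as the referenced TLS secret
spec:
  tls:
  - hosts:
    - foo.bar.com
    secretName: tls-certificate   # the kubernetes.io/tls secret, not the dhparam one
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: hub   # must also exist in this namespace
          servicePort: 8000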

Related

Expose Elastic APM through Ingress Controller

I have deployed the Elastic APM server into Kubernetes and was trying to expose it through the NGINX ingress controller. The following is my configuration:
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: elastic
  name: apm-server-config
  labels:
    k8s-app: apm-server
data:
  apm-server.yml: |-
    apm-server:
      host: "0.0.0.0:8200"
    setup.kibana:
      enabled: "true"
      host: "kibana:5601"
    output.elasticsearch:
      hosts: ["elastic:9200"]
---
# Deployment Configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: apm-server
    env: msprod
    state: common
  name: apm-server
  namespace: elastic
spec:
  replicas: 1
  minReadySeconds: 10
  selector:
    matchLabels:
      app: apm-server
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: apm-server
    spec:
      containers:
      - image: docker.elastic.co/apm/apm-server:7.12.1
        imagePullPolicy: Always
        env:
        - name: output.elasticsearch.hosts
          value: "http://elastic:9200"
        name: apm-server
        ports:
        - name: liveness-port
          containerPort: 8200
        volumeMounts:
        - name: apm-server-config
          mountPath: /usr/share/apm-server/apm-server.yml
          readOnly: true
          subPath: apm-server.yml
        resources:
          limits:
            cpu: 250m
            memory: 1024Mi
          requests:
            cpu: 100m
            memory: 250Mi
      volumes:
      - name: apm-server-config
        configMap:
          name: apm-server-config
      nodeSelector:
        env: prod
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
---
# Service Configuration
apiVersion: v1
kind: Service
metadata:
  labels:
    app: apm-server
  name: apm-server
  namespace: elastic
spec:
  ports:
  - port: 8200
    targetPort: 8200
    name: http
    nodePort: 31000
  selector:
    app: apm-server
  sessionAffinity: None
  type: NodePort
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  namespace: elastic
  name: gateway-ingress-apm
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: my.domain.com
    http:
      paths:
      - path: /apm
        backend:
          serviceName: apm-server
          servicePort: 8200
The pod is running and I am able to hit the APM server using kubectl port-forward.
But when I access the APM server at https://my.domain.com/apm, I get a page-not-found error in the browser and the following error in the APM pod:
{"log.level":"error","@timestamp":"2021-10-21T06:22:00.198Z","log.logger":"request","log.origin":{"file.name":"middleware/log_middleware.go","file.line":60},"message":"404 page not found","url.original":"/apm","http.request.method":"GET","user_agent.original":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36","source.address":"10.148.7.7","http.request.body.bytes":0,"http.request.id":"9294124a-5356-4b2c-ba8e-c0a589b23571","event.duration":110881,"http.response.status_code":404,"error.message":"404 page not found","ecs.version":"1.6.0"}
The error occurs because no context path is configured in APM. I have gone through the APM documentation and couldn't find a way to configure a context path in the APM server. Please help.
Posting this as an answer out of the comments.
The initial ingress rule passes the same path, /apm, through to the APM service, which is confirmed by the error in the APM pod's logs: "message":"404 page not found","url.original":"/apm"
To fix it, nginx ingress has a rewrite annotation. The way it works is described in the link, with an example.
The final ingress.yaml should look like:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  namespace: elastic
  name: gateway-ingress-apm
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2 # adding captured group
spec:
  rules:
  - host: my.domain.com
    http:
      paths:
      - path: /apm(/|$)(.*) # so the captured group works correctly
        backend:
          serviceName: apm-server
          servicePort: 8200
What happens here is that requests sent to my.domain.com/apm go to the service on the / path.
The captured group preserves sub-paths correctly: for instance, if a request goes to my.domain.com/apm/something, the ingress translates it to /something, which is then passed to the service.
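To quickly verify the rewrite from outside the cluster, you can send requests through the controller; a sketch, where <ingress-external-ip> is a placeholder for your controller's address:
# my.domain.com/apm/ -> APM server sees /
curl -H 'Host: my.domain.com' http://<ingress-external-ip>/apm/
# my.domain.com/apm/something -> APM server sees /something
curl -H 'Host: my.domain.com' http://<ingress-external-ip>/apm/something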

Kubernetes Ingress path based routing not working as expected

I installed the NGINX Ingress in my Kubernetes cluster. When I try to access a microservice endpoint via the Ingress controller, it does not work as expected.
I have deployed two Spring Boot applications.
Ingress rules:
Path 1 -> /customer
Path 2 -> /prac
When I try to access one of the services, e.g. http://test.practice.com/prac/practice/getprac, it does not work, but when I try to access it without the Ingress path, http://test.practice.com/practice/getprac, it works.
I am not able to understand why it is not working with the Ingress path; the same happens for the other service.
Microservice 1 (port 9090)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customer
  namespace: practice
  labels:
    app: customer
spec:
  replicas: 5
  selector:
    matchLabels:
      app: customer
  template:
    metadata:
      labels:
        app: customer
    spec:
      imagePullSecrets:
      - name: testkuldeepsecret
      containers:
      - name: customer
        image: kuldeep99/customer:v1
        ports:
        - containerPort: 9090
          hostPort: 9090
---
apiVersion: v1
kind: Service
metadata:
  name: customer-service
  namespace: practice
  labels: {}
spec:
  ports:
  - port: 9090
    targetPort: 9090
    protocol: TCP
    name: http
  selector:
    app: customer
Microservice 2 (port 8000)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prac
  namespace: practice
  labels:
    app: prac
spec:
  replicas: 4
  selector:
    matchLabels:
      app: prac
  template:
    metadata:
      labels:
        app: prac
    spec:
      imagePullSecrets:
      - name: testkuldeepsecret
      containers:
      - name: prac
        image: kuldeep99/practice:v1
        ports:
        - containerPort: 8000
          hostPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: prac-service
  namespace: practice
  labels: {}
spec:
  ports:
  - port: 8000
    targetPort: 8000
    protocol: TCP
    name: http
  selector:
    app: prac
Services (customer-service and prac-service):
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
customer-service   ClusterIP   10.97.203.19    <none>        9090/TCP   39m
ngtest             ClusterIP   10.98.74.149    <none>        80/TCP     21h
prac-service       ClusterIP   10.96.164.210   <none>        8000/TCP   15m
some-mysql         ClusterIP   None            <none>        3306/TCP   2d16h
Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: practice-ingress
  namespace: practice
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: practice.example.com
    http:
      paths:
      - backend:
          serviceName: customer-service
          servicePort: 9090
        path: /customer
      - backend:
          serviceName: prac-service
          servicePort: 8000
        path: /prac
There are two different NGINX ingress projects, and it looks like you have installed the wrong one: for the nginx.ingress.kubernetes.io/rewrite-target: / annotation to work properly, you need the Kubernetes community controller, ingress-nginx (not NGINX Inc.'s nginx-ingress).
An alternative way to solve this is to configure the contextPath to /prac in the Spring application, as sketched below.
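A minimal sketch of that alternative, assuming Spring Boot 2.x (the file is the standard src/main/resources/application.properties of the prac service):
# application.properties of the prac service
# makes the app itself serve everything under /prac,
# so the Ingress no longer needs to rewrite the path
server.servlet.context-path=/prac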
On top of the discussion, I observed one thing. We should not confuse:
apiVersion: networking.k8s.io/v1
kind: Ingress
and
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
First ensure which ingress controller you are using, and decide the apiVersion based on that. I'm using "ingress-nginx" (not "nginx-ingress"). It supports apiVersion: networking.k8s.io/v1beta1 and works like a charm, as per Arsene's comment.
This Ingress yaml file WORKS with the "ingress-nginx" ingress controller:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: k8-exercise-03-two-app-ingress
spec:
  rules:
  - host: ex03.k8.sb.two.app.ingress.com
    http:
      paths:
      - backend:
          serviceName: k8-excercise-01-app-service
          servicePort: 8080
        path: /one(/|$)(.*)
      - backend:
          serviceName: k8-exercise-03-ms-service
          servicePort: 8081
        path: /two(/|$)(.*)
But this Ingress yaml file does NOT WORK with the "ingress-nginx" ingress controller:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8-exercise-03-two-app-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # nginx.ingress.kubernetes.io/use-regex: "true"
    ingress.kubernetes.io/rewrite-target: /$2
spec:
  # ingressClassName: nginx
  rules:
  # 192.168.1.5 ex03.k8.sb.com is mapped in the hosts file. 192.168.1.5 is the host machine IP
  - host: ex03.k8.sb.two.app.ingress.com
    http:
      paths:
      - backend:
          service:
            name: k8-excercise-01-app-service
            port:
              number: 8080
        path: /one(/|$)(.*)
        pathType: Prefix
      - pathType: Prefix
        path: /two(/|$)(.*)
        backend:
          service:
            name: k8-exercise-03-ms-service
            port:
              number: 8081
I can access the Spring Boot API calls like this:
For App-1:
http://ex03.k8.sb.two.app.ingress.com/one/
Result: App One - Root
http://ex03.k8.sb.two.app.ingress.com/one/one
Result: App One - One API
http://ex03.k8.sb.two.app.ingress.com/one/api/v1/hello
Result: App One - Hello API
For App-2:
http://ex03.k8.sb.two.app.ingress.com/two/message/James%20Bond
Result: App Two - Hi James Bond API
Finally, if anyone knows how to change the "apiVersion: networking.k8s.io/v1" yaml to support the "ingress-nginx" controller, it would be appreciated. Thank you, and sorry for the long content.
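For what it's worth, a sketch of a networking.k8s.io/v1 version that should work with ingress-nginx, based on its rewrite documentation rather than verified on this exact cluster: regex paths need the nginx.ingress.kubernetes.io/ annotation prefix (not ingress.kubernetes.io/) and pathType: ImplementationSpecific rather than Prefix:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8-exercise-03-two-app-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: ex03.k8.sb.two.app.ingress.com
    http:
      paths:
      - path: /one(/|$)(.*)
        pathType: ImplementationSpecific   # regex paths are not valid Prefix paths
        backend:
          service:
            name: k8-excercise-01-app-service
            port:
              number: 8080
      - path: /two(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: k8-exercise-03-ms-service
            port:
              number: 8081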
I spent literally a day on this problem. The problem was simply that the wrong nginx was installed. I used the Helm chart found here to install ingress-nginx.
To install it, please use Helm version 3:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx
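Before creating Ingress resources, you can confirm the controller came up (a sketch; the label and service name below are the chart defaults, as visible in the notes that follow):
kubectl get pods -l app.kubernetes.io/name=ingress-nginx
kubectl --namespace default get services -o wide ingress-nginx-controller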
Once it runs, you will see in the logs a snippet that illustrates what your Ingress should look like. In case you want to do the above, you can use the annotation suggested above, and from there you can follow the tutorials here to achieve more, such as rewrites.
My cluster is deployed on GCP using GKE.
When done, this is the output log:
NAME: ingress-nginx
LAST DEPLOYED: Sat Apr 24 07:56:11 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace default get services -o wide -w ingress-nginx-controller'
An example Ingress that makes use of the controller:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: example
  namespace: foo
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - backend:
          serviceName: exampleService
          servicePort: 80
        path: /
  # This section is only required if TLS is to be enabled for the Ingress
  tls:
  - hosts:
    - www.example.com
    secretName: example-tls
If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:
apiVersion: v1
kind: Secret
metadata:
  name: example-tls
  namespace: foo
data:
  tls.crt: <base64 encoded cert>
  tls.key: <base64 encoded key>
type: kubernetes.io/tls
This is how mine looks now, after installing it:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: example
  # namespace: foo
spec:
  rules:
  - host: [your ip address].sslip.io
    http:
      paths:
      - backend:
          serviceName: registry-app-server
          servicePort: 8761
        path: /eureka/(.*)
      - backend:
          serviceName: api-gateway-server
          servicePort: 7000
        path: /api(/|$)(.*)
As you can see, I am deploying Spring microservices using Kubernetes (GKE).
There are a lot of benefits to using nginx-ingress over the built-in GKE ingress, and it is more popular than its counterparts.

Exposing a pod to the local environment for easy development purposes?

I am currently working with Kubernetes, trying to build a development process that makes it possible for a developer to access services within a local Kubernetes cluster. I would like to keep it simple. For now I have tried kubectl port-forward kafka 10000:9092, but this didn't seem to expose the pod on localhost:10000.
I've tried converting the kafka service to a NodePort, still with no luck; the only way I could access it was by building my application as a dockerized application and running it in a container, meaning that running the exe file would not connect, but executing it via Docker would work.
I've tried kubectl proxy, which doesn't work either; I am not able to ping the ClusterIP.
I have not tried an Ingress or LoadBalancer, as I find that a bit too elaborate, considering this is only for development purposes and not something that should be production "secure".
How do I easily expose the kafka service so that a console application on my laptop, running a local Kubernetes cluster, can access it?
apiVersion: v1
kind: Service
metadata:
  name: kafka-headless
spec:
  clusterIP: None
  ports:
  - name: broker
    port: 9092
    protocol: TCP
    targetPort: 9092
  selector:
    app: kafka
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: kafka
spec:
  ports:
  - name: broker
    port: 9092
    protocol: TCP
    targetPort: 9092
  selector:
    app: kafka
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: kafka
  name: kafka
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      app: kafka
  serviceName: kafka-headless
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
      - command:
        - sh
        - -exc
        - |
          unset KAFKA_PORT && \
          export KAFKA_BROKER_ID=${HOSTNAME##*-} && \
          export KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://${POD_IP}:9092 && \
          exec /etc/confluent/docker/run
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        - name: KAFKA_HEAP_OPTS
          value: -Xmx1G -Xms1G
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: leader-zookeeper:2181
        - name: KAFKA_LOG_DIRS
          value: /opt/kafka/data/logs
        - name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
          value: "1"
        image: confluentinc/cp-kafka:latest
        imagePullPolicy: IfNotPresent
        livenessProbe:
          exec:
            command:
            - sh
            - -ec
            - /usr/bin/jps | /bin/grep -q SupportedKafka
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: kafka-broker
        readinessProbe:
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          tcpSocket:
            port: kafka
          timeoutSeconds: 5
        ports:
        - containerPort: 9092
          name: kafka
          protocol: TCP
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /opt/kafka/data
          name: datadir
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 60
  updateStrategy:
    type: OnDelete
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  labels:
    app: kafka-pdb
  name: kafka-pdb
spec:
  maxUnavailable: 0
  selector:
    matchLabels:
      app: kafka
To port-forward to a service, you need to put svc/ in front of the name. So your command would be either kubectl port-forward svc/kafka 10000:9092 or kubectl port-forward kafka-0 10000:9092.
On Windows, make sure the Windows firewall is not blocking kubectl.
Reference:
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/#forward-a-local-port-to-a-port-on-the-pod
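With the forward running, you can check that the broker answers locally; a sketch, assuming the kcat (formerly kafkacat) CLI is installed on the laptop:
# list broker metadata through the forwarded port
kcat -b localhost:10000 -L
Note that the broker in the StatefulSet above advertises PLAINTEXT://${POD_IP}:9092, so a full produce/consume round trip through the tunnel may still require adjusting KAFKA_ADVERTISED_LISTENERS.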
You can also use Telepresence to debug an existing service on the cluster by swapping it with a local development version.
Install telepresence and use telepresence --swap-deployment $DEPLOYMENT_NAME
Reference:
https://kubernetes.io/docs/tasks/debug-application-cluster/local-debugging/#developing-or-debugging-an-existing-service
https://www.telepresence.io/reference/install
If I understand you correctly, I have some additional options for you to check:
This answer uses the idea of externalTrafficPolicy: Local alongside other possible solutions.
I see from the comments that you are using Docker Desktop for Windows. You can try a type: LoadBalancer service instead of ClusterIP or NodePort. I know it may sound weird, but I have seen a few examples, like this one, showing that it actually works; a sketch follows.
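A sketch of that variant, assuming Docker Desktop's built-in load balancer maps the service to localhost (the service name kafka-external is hypothetical):
apiVersion: v1
kind: Service
metadata:
  name: kafka-external
spec:
  type: LoadBalancer   # Docker Desktop exposes this on localhost
  ports:
  - name: broker
    port: 10000        # local port the console app would connect to
    targetPort: 9092
  selector:
    app: kafka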
I am posting this as a community answer because the proposed solutions were not originally my ideas.
Please let me know if that helped.

nginx-ingress - https configuration - server IP address could not be found

I want to enable HTTPS for my web app, hosted in GKE. I have a domain name, arindam.fr; the DNS zone is set up in Cloud DNS, with NS records and a Type A record.
I am getting the error:
This site can't be reached. arindam.fr's server IP address could not be found.
when accessing the page https://arindam.fr/
https://github.com/arindam-b/DNSissue/blob/master/3.png
https://github.com/arindam-b/DNSissue/blob/master/1.PNG "Cloud DNS"
My Deployment & Service yaml are further below. My ingress yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: default
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - arindam.fr
    secretName: tls-staging-cert
  rules:
  - host: arindam.fr
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-app
          servicePort: 8080
Before that, I installed the nginx controller and cert-manager using helm:
helm install --name nginx-ingress stable/nginx-ingress
The domain's NS entries are configured in my domain registration at namecheap.com:
https://github.com/arindam-b/DNSissue/blob/master/2.PNG "NS Configuration"
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-app
        track: stable
    spec:
      containers:
      - name: hello-app
        image: "eu.gcr.io/rcup-mza-dev/hello-app:latest"
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 15
          timeoutSeconds: 30
        readinessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 15
          timeoutSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  name: hello-app
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-app
  # type: LoadBalancer
Am I missing something?
It seems that your registrar's configuration is not propagating to Google's nameservers correctly; I just checked it via the following link. I also found this guide on how to change the NS records in Namecheap; keep in mind that you need to select the "Custom DNS" option to specify Google's nameservers.
After your registrar propagates the nameservers correctly, which can take between 24 and 72 hours, you will be able to reach your domain.
DNSSEC was turned off, so it was not properly propagating. After turning it on, it works fine.

Deploy Laravel in Kubernetes

I'm trying to deploy a Laravel application in Kubernetes on the Google Cloud Platform.
I followed a couple of tutorials and was successful trying them locally on a Docker VM:
https://learnk8s.io/blog/kubernetes-deploy-laravel-the-easy-way
https://blog.cloud66.com/deploying-your-laravel-php-applications-with-cloud-66/
But when I tried to deploy in Kubernetes, using an Ingress to assign a domain name to the application, I keep getting the 502 Bad Gateway page.
I'm using a nginx ingress controller with the image k8s.gcr.io/nginx-ingress-controller:0.8.3, and my ingress is as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - domainname.com
    secretName: sslcertificate
  rules:
  - host: domain.com
    http:
      paths:
      - backend:
          serviceName: service
          servicePort: 80
        path: /
This is my application service:
apiVersion: v1
kind: Service
metadata:
  name: service
  labels:
    name: demo
    version: v1
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    name: demo
  type: NodePort
This is my ingress controller:
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  labels:
    k8s-app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    k8s-app: default-http-backend
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: default-http-backend
spec:
  replicas: 1
  selector:
    k8s-app: default-http-backend
  template:
    metadata:
      labels:
        k8s-app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: gcr.io/google_containers/defaultbackend:1.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-controller
  labels:
    k8s-app: nginx-ingress-lb
spec:
  replicas: 1
  selector:
    k8s-app: nginx-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-lb
        name: nginx-ingress-lb
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
        name: nginx-ingress-lb
        imagePullPolicy: Always
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        # use downward API
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        # we expose 18080 to access nginx stats at the url /nginx-status
        # this is optional
        - containerPort: 18080
          hostPort: 18080
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
And here is my Laravel application deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: demo-rc
  labels:
    name: demo
    version: v1
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        name: demo
        version: v1
    spec:
      containers:
      - image: gcr.io/projectname/laravelapp:vx
        name: app-pod
        ports:
        - containerPort: 8080
I tried to add the domain entry to the hosts file, but with no luck!
Is there a specific configuration I have to add to the configmap.yaml file for the nginx ingress controller?
In short, to be able to reach your application via the external domain name (singapore.smartlabplatform.com), you need to create an A DNS record for the GCP L4 load balancer's external IP address (in other words, the EXTERNAL-IP of your default nginx-ingress-controller Service), here seen as pending:
==> v1/Service
NAME                            TYPE           CLUSTER-IP     EXTERNAL-IP
nginx-ingress-controller        LoadBalancer   10.7.248.226   <pending>
nginx-ingress-default-backend   ClusterIP      10.7.245.75    <none>
How to do this is explained on the GKE tutorials page here.
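Once the EXTERNAL-IP is assigned, you can read it straight off the Service to create the A record; a sketch, using the service name from the output above:
kubectl get svc nginx-ingress-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'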
In the current state of your environment, you can reach your application in only two ways:
From outside, via the load balancer's EXTERNAL-IP;
From inside your Kubernetes cluster, using the laravel-kubernetes-demo service DNS name:
$ curl laravel-kubernetes-demo.default.svc.cluster.local
<title>Laravel Kubernetes Demo :: LearnK8s</title>
If you want all that magic, like the automatic creation of DNS records, to happen along with the appearance of host: domain.com in your Ingress resource spec, you should use external-dns (it makes Kubernetes resources discoverable via public DNS servers); here is the tutorial on how to set it up specifically for GKE.
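For illustration, once external-dns is deployed and watching the cluster, a hostname annotation on a Service is enough for it to create the record; a sketch, assuming external-dns is configured with the Cloud DNS provider (the Service shape mirrors the one above):
apiVersion: v1
kind: Service
metadata:
  name: service
  annotations:
    # external-dns watches for this and creates the A record automatically
    external-dns.alpha.kubernetes.io/hostname: domain.com
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    name: demo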
