Exposing a pod to the local environment for easy development purposes? - visual-studio

I am currently working with Kubernetes and trying to set up a development workflow that lets a developer access services running inside a local Kubernetes cluster. I would like to keep it simple, and have so far tried kubectl port-forward kafka 10000:9092, but this didn't seem to expose the pod on localhost:10000.
I've also tried converting the kafka service to a NodePort, still with no luck - the only way I could access it was to dockerize my application and run it in a container, meaning that running the exe directly would not connect, but running it through Docker would.
I've tried kubectl proxy as well, which doesn't work either - I am not able to ping the cluster IP.
I have not tried an Ingress or LoadBalancer, as that feels a bit too elaborate considering this is only for development purposes and not something that has to be production "secure".
How do I easily expose the Kafka service so that a console application on the laptop that runs the Kubernetes cluster locally can access it?
apiVersion: v1
kind: Service
metadata:
  name: kafka-headless
spec:
  clusterIP: None
  ports:
  - name: broker
    port: 9092
    protocol: TCP
    targetPort: 9092
  selector:
    app: kafka
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: kafka
spec:
  ports:
  - name: broker
    port: 9092
    protocol: TCP
    targetPort: 9092
  selector:
    app: kafka
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: kafka
  name: kafka
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      app: kafka
  serviceName: kafka-headless
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
      - command:
        - sh
        - -exc
        - |
          unset KAFKA_PORT && \
          export KAFKA_BROKER_ID=${HOSTNAME##*-} && \
          export KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://${POD_IP}:9092 && \
          exec /etc/confluent/docker/run
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        - name: KAFKA_HEAP_OPTS
          value: -Xmx1G -Xms1G
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: leader-zookeeper:2181
        - name: KAFKA_LOG_DIRS
          value: /opt/kafka/data/logs
        - name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
          value: "1"
        image: confluentinc/cp-kafka:latest
        imagePullPolicy: IfNotPresent
        livenessProbe:
          exec:
            command:
            - sh
            - -ec
            - /usr/bin/jps | /bin/grep -q SupportedKafka
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: kafka-broker
        readinessProbe:
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          tcpSocket:
            port: kafka
          timeoutSeconds: 5
        ports:
        - containerPort: 9092
          name: kafka
          protocol: TCP
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /opt/kafka/data
          name: datadir
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 60
  updateStrategy:
    type: OnDelete
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  labels:
    app: kafka-pdb
  name: kafka-pdb
spec:
  maxUnavailable: 0
  selector:
    matchLabels:
      app: kafka
---

To port-forward to a service, you need to put svc/ in front of the name. So your command would be either kubectl port-forward svc/kafka 10000:9092 or kubectl port-forward kafka-0 10000:9092 (see the sketch below).
On Windows, make sure the Windows firewall is not blocking kubectl.
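For reference, a minimal sketch of both forms, assuming the manifests from the question above (Service kafka, StatefulSet pod kafka-0):
# forward local port 10000 to the Service (kubectl proxies to a ready pod)
kubectl port-forward svc/kafka 10000:9092
# or forward straight to the pod created by the StatefulSet
kubectl port-forward kafka-0 10000:9092
# a console client on the laptop can then try localhost:10000; note that the
# StatefulSet advertises PLAINTEXT://${POD_IP}:9092, so a Kafka client may still
# be redirected to the pod IP after the initial metadata handshake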
Reference:
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/#forward-a-local-port-to-a-port-on-the-pod
You can also use Telepresence to debug an existing service on the cluster by swapping it with a local development version.
Install Telepresence and use telepresence --swap-deployment $DEPLOYMENT_NAME (a sketch follows the references below).
Reference:
https://kubernetes.io/docs/tasks/debug-application-cluster/local-debugging/#developing-or-debugging-an-existing-service
https://www.telepresence.io/reference/install
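A minimal sketch, assuming the classic Telepresence v1 CLI (the deployment name is whatever you want to swap):
# open a shell whose network traffic and DNS are proxied into the cluster;
# a console app started from that shell can then reach kafka:9092 by name
telepresence --run-shell
# or replace an existing deployment with your local process
telepresence --swap-deployment $DEPLOYMENT_NAME --run-shell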

If I understand you correctly, I have some additional options for you to check:
This answer uses the idea of externalTrafficPolicy: Local alongside other possible solutions.
I see from the comments that you are using Docker Desktop for Windows. You can try a type: LoadBalancer service instead of ClusterIP or NodePort. I know it may sound weird, but I have seen a few examples like this one showing that it actually works.
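As a sketch of that suggestion, a separate LoadBalancer Service could sit next to the existing ones (kafka-lb is a hypothetical name; the selector matches the manifests in the question):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: kafka-lb
spec:
  type: LoadBalancer
  selector:
    app: kafka
  ports:
  - name: broker
    port: 9092
    targetPort: 9092
EOF
# on Docker Desktop the EXTERNAL-IP usually resolves to localhost,
# so a console app on the laptop would connect to localhost:9092
kubectl get svc kafka-lb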
I am posting this as a community answer because the proposed solutions were not originally my ideas.
Please let me know if that helped.

Related

Kubernetes spring application in docker connect external service

I'm new to the Kubernetes and Docker world :)
I'm trying to deploy our application in Docker on Kubernetes, but I can't connect to the external MySQL database.
My steps:
1. Install Kubernetes with kubeadm on our new server.
2. Create a Docker image from our application with mvn spring-boot:build-image.
3. Create a deployment and service YAML to use the image.
Deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  labels:
    app: demo-app
  name: demo-app
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - image: demo/demo-app:0.1.05-SNAPSHOT
        imagePullPolicy: IfNotPresent
        name: demo-app-service
        env:
        - name: SPRING_DATASOURCE_URL
          value: jdbc:mysql://mysqldatabase/DBDEV?serverTimezone=Europe/Budapest&useLegacyDatetimeCode=false
        ports:
        - containerPort: 4000
        volumeMounts:
        - name: uploads
          mountPath: /uploads
        - name: ssl-dir
          mountPath: /ssl
      volumes:
      - name: ssl-dir
        hostPath:
          path: /var/www/dev.hu/backend/ssl
      - name: uploads
        hostPath:
          path: /var/www/dev.hu/backend/uploads
      restartPolicy: Always
Service YAML:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: demo-app
  name: demo-app
  namespace: default
spec:
  ports:
  - port: 4000
    name: spring
    protocol: TCP
    targetPort: 4000
  selector:
    app: demo-app
  sessionAffinity: None
  type: LoadBalancer
4. Create an Endpoints and Service YAML to communicate with the outside:
kind: Endpoints
apiVersion: v1
metadata:
  name: mysqldatabase
subsets:
- addresses:
  - ip: 10.10.0.42
  ports:
  - port: 3306
---
kind: Service
apiVersion: v1
metadata:
  name: mysqldatabase
spec:
  type: ClusterIP
  ports:
  - port: 3306
    targetPort: 3306
But it's not working; when I look at the logs I see that Spring can't connect to the database.
Caused by: java.net.UnknownHostException: mysqldatabase
at java.net.InetAddress.getAllByName0(InetAddress.java:1281)
at java.net.InetAddress.getAllByName(InetAddress.java:1193)
at java.net.InetAddress.getAllByName(InetAddress.java:1127)
at com.mysql.cj.protocol.StandardSocketFactory.connect(StandardSocketFactory.java:132)
at com.mysql.cj.protocol.a.NativeSocketConnection.connect(NativeSocketConnection.java:63)
Thanks for any help.
Hold on: you don't create Endpoints yourself; Endpoints are registered by Kubernetes when a Service has matching pods. Right now, you have deployed your application and exposed it via a Service.
If you want to connect to your MySQL database via a Service, it needs to be deployed on Kubernetes as well. If it is not hosted on Kubernetes, you will need the hostname or IP address of the database and must adapt your SPRING_DATASOURCE_URL accordingly!
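As a hedged sketch of that last point, assuming the database really is the external host 10.10.0.42 from your Endpoints, you could point Spring at it directly:
# point Spring straight at the external host instead of the in-cluster name
kubectl set env deployment/demo-app \
  SPRING_DATASOURCE_URL='jdbc:mysql://10.10.0.42:3306/DBDEV?serverTimezone=Europe/Budapest&useLegacyDatetimeCode=false'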

Debugging uWSGI in kubernetes

I have a pair of Kubernetes pods, one for nginx and one for Python Flask + uWSGI. I have tested my setup locally with docker-compose and it worked fine; however, after deploying to Kubernetes it seems there is no communication between the two. The end result is that I get a 502 Gateway Error when trying to reach my location.
So my question is not really about what is wrong with my setup, but rather what tools can I use to debug this scenario. Is there a test-client for uwsgi? Can I use ncat? I don't seem to get any useful log output from nginx, and I don't know if uwsgi even has a log.
How can I debug this?
For reference, here is my nginx location:
location / {
    # Trick to avoid nginx aborting at startup (set server in variable)
    set $upstream_server ${APP_SERVER};
    include uwsgi_params;
    uwsgi_pass $upstream_server;
    uwsgi_read_timeout 300;
    uwsgi_intercept_errors on;
}
Here is my wsgi.ini:
[uwsgi]
module = my_app.app
callable = app
master = true
processes = 5
socket = 0.0.0.0:5000
die-on-term = true
uid = www-data
gid = www-data
Here is the kubernetes deployment.yaml for nginx:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: nginx
  name: nginx
spec:
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      service: nginx
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        service: nginx
    spec:
      imagePullSecrets:
      - name: docker-reg
      containers:
      - name: nginx
        image: <custom image url>
        imagePullPolicy: Always
        env:
        - name: APP_SERVER
          valueFrom:
            secretKeyRef:
              name: my-environment-config
              key: APP_SERVER
        - name: FK_SERVER_NAME
          valueFrom:
            secretKeyRef:
              name: my-environment-config
              key: SERVER_NAME
        ports:
        - containerPort: 80
        - containerPort: 10443
        - containerPort: 10090
        resources:
          requests:
            cpu: 1m
            memory: 200Mi
        volumeMounts:
        - mountPath: /etc/letsencrypt
          name: my-storage
          subPath: nginx
        - mountPath: /dev/shm
          name: dshm
      restartPolicy: Always
      volumes:
      - name: my-storage
        persistentVolumeClaim:
          claimName: my-storage-claim-nginx
      - name: dshm
        emptyDir:
          medium: Memory
Here is the kubernetes service.yaml for nginx:
apiVersion: v1
kind: Service
metadata:
  labels:
    service: nginx
  name: nginx
spec:
  type: LoadBalancer
  ports:
  - name: "nginx-port-80"
    port: 80
    targetPort: 80
    protocol: TCP
  - name: "nginx-port-443"
    port: 443
    targetPort: 10443
    protocol: TCP
  - name: "nginx-port-10090"
    port: 10090
    targetPort: 10090
    protocol: TCP
  selector:
    service: nginx
Here is the kubernetes deployment.yaml for python flask:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: my-app
  name: my-app
spec:
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      service: my-app
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        service: my-app
    spec:
      imagePullSecrets:
      - name: docker-reg
      containers:
      - name: my-app
        image: <custom image url>
        imagePullPolicy: Always
        ports:
        - containerPort: 5000
        resources:
          requests:
            cpu: 1m
            memory: 100Mi
        volumeMounts:
        - name: merchbot-storage
          mountPath: /app/data
          subPath: my-app
        - name: dshm
          mountPath: /dev/shm
        - name: local-config
          mountPath: /app/secrets/local_config.json
          subPath: merchbot-local-config-test.json
      restartPolicy: Always
      volumes:
      - name: merchbot-storage
        persistentVolumeClaim:
          claimName: my-storage-claim-app
      - name: dshm
        emptyDir:
          medium: Memory
      - name: local-config
        secret:
          secretName: my-app-local-config
Here is the kubernetes service.yaml for python flask:
apiVersion: v1
kind: Service
metadata:
  labels:
    service: my-app
  name: my-app
spec:
  ports:
  - name: "my-app-port-5000"
    port: 5000
    targetPort: 5000
  selector:
    service: my-app
Debugging in Kubernetes is not very different from debugging outside it; there are just some concepts that need to be overlaid for the Kubernetes world.
A Pod in Kubernetes is what you would conceptually see as a host in the VM world. Every container running in a Pod will see each other's services on localhost. From there, traffic from a Pod to anything else will have a network connection involved (even if the endpoint is node local). So start testing with services on localhost and work your way out through pod IP, service IP, and service name.
Some complexity comes from having the debug tools available in the containers. Generally containers are built slim and don't have everything available. So you either need to install tools while a container is running (if you can) or build a special "debug" container you can deploy on demand in the same environment. You can always fall back to testing from the cluster nodes which also have access.
Where you have Python available you can test with uwsgi_curl:
pip install uwsgi-tools
uwsgi_curl hostname:port /path
Otherwise nc/curl will suffice, to a point.
Pod to localhost
First step is to make sure the container itself is responding. In this case you are likely to have python/pip available to use uwsgi_curl
kubectl exec -ti my-app-XXXX-XXXX sh
nc -v localhost 5000
uwsgi_curl localhost:5000 /path
Pod to Pod/Service
Next, include the Kubernetes networking. Start with IPs and finish with names.
You're less likely to have python here, or even nc, but I think testing the environment variables is important here:
kubectl exec -ti nginx-XXXX-XXXX sh
nc -v my-app-pod-IP 5000
nc -v my-app-service-IP 5000
nc -v my-app-service-name 5000
echo $APP_SERVER
echo $FK_SERVER_NAME
nc -v $APP_SERVER 5000
# or
uwsgi_curl $APP_SERVER:5000 /path
Debug Pod to Pod/Service
If you do need to use a debug pod, try and mimic the pod you are testing as much as possible. It's great to have a generic debug pod/deployment to quickly test anything, but if that doesn't reveal the issue you may need to customise the deployment to mimic the pod you are testing more closely.
In this case the environment variables play a part in the connection setup, so that should be emulated for a debug pod.
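For example, a throwaway debug pod with the relevant environment variable set might look like this (busybox is just one convenient image that ships nc, and the value my-app:5000 is only an assumption about what the APP_SERVER secret should contain):
kubectl run debug --rm -it --restart=Never --image=busybox \
  --env=APP_SERVER=my-app:5000 -- sh
# inside the pod:
nc -v my-app 5000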
Node to Pod/Service
Pods/Services will be available from the cluster nodes (if you are not using restrictive network policies) so usually the quick test is to check Pods/Services are working from there:
nc -v <pod_ip> <container_port>
nc -v <service_ip> <service_port>
nc -v <service_dns> <service_port>
In this case:
nc -v <my_app_pod_ip> 5000
nc -v <my_app_service_ip> 5000
nc -v my-app.<namespace>.svc.cluster.local 5000

Access Elasticsearch from minikube/kubernetes

I have a spring boot application which is deployed in Kubernetes on local windows machine using minikube. I also have Elasticsearch running on my local machine (http://localhost:9200).
I want to call Elasticsearch REST endpoints from this spring boot app.
I tried solving this by creating a service without a selector, but I'm not sure what I am missing.
When accessing the spring boot app using http://#minikube_ip#:#Node_Port#, I get an error "No route to host".
I tried doing minikube ssh and executing the curl command from there as well, and I get the same error. Clearly I am missing something here.
application.yaml
elasticsearch:
  hosts:
  - http://my-es:80
  connectTimeout: 10000
  connectionRequestTimeout: 10000
  socketTimeout: 10000
  maxRetryTimeoutMillis: 60000
deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-es-app
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: kube-es-app
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: kube-es-app
    spec:
      containers:
      - image: elastic-search-app:latest
        imagePullPolicy: Never
        name: kube-es-app
        ports:
        - containerPort: 8080
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
  name: my-es
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9200
---
kind: Endpoints
apiVersion: v1
metadata:
  name: my-es
subsets:
- addresses:
  - ip: <MY_LOCAL_MACHINE_IP>
  ports:
  - port: 9200
Commands I executed
docker build -t elastic-search-app .
kubectl create -f deployment.yaml
kubectl expose deployment/kube-es-app --type="NodePort" --port 8080
Can anyone help, please? I am stuck.
If I've got the description right, the Windows machine should have a vbox network adapter connected to the host-only network the Minikube VM is connected to.
Minikube can access the host machine directly because both are on the same network.
Minikube is in charge of NAT-ting packets from Pods to the outside. What you need is to let Elasticsearch listen on the vbox interface (or all interfaces) and to open its port in the Windows firewall. Then Elasticsearch should be reachable via the IP address of the Windows host on the host-only network.
Apart from that, you might create a service (if you need to go by name instead of IP) as discussed here:
Connect to local database from inside minikube cluster,
Minikube:Exposing mysql as a service on localhost.
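A hedged sketch of those steps (192.168.99.1 is the usual VirtualBox host-only address of the Windows host, but verify it with ipconfig):
# elasticsearch.yml on the Windows host: listen on all interfaces
#   network.host: 0.0.0.0
# open the port in the Windows firewall (run in an elevated prompt)
netsh advfirewall firewall add rule name="es-9200" dir=in action=allow protocol=TCP localport=9200
# then point the selector-less Service's Endpoints at that address
kubectl apply -f - <<EOF
kind: Endpoints
apiVersion: v1
metadata:
  name: my-es
subsets:
- addresses:
  - ip: 192.168.99.1
  ports:
  - port: 9200
EOF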

deploy Laravel in kubernetes

I'm trying to deploy a Laravel application on Kubernetes at Google Cloud Platform.
I followed a couple of tutorials and was successful trying them locally on a Docker VM.
https://learnk8s.io/blog/kubernetes-deploy-laravel-the-easy-way
https://blog.cloud66.com/deploying-your-laravel-php-applications-with-cloud-66/
But when I try to deploy it on Kubernetes using an Ingress to assign a domain name to the application, I keep getting a 502 Bad Gateway page.
I'm using a nginx ingress controller with image k8s.gcr.io/nginx-ingress-controller:0.8.3 and my ingress is as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - domainname.com
    secretName: sslcertificate
  rules:
  - host: domain.com
    http:
      paths:
      - backend:
          serviceName: service
          servicePort: 80
        path: /
This is my application service:
apiVersion: v1
kind: Service
metadata:
  name: service
  labels:
    name: demo
    version: v1
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    name: demo
  type: NodePort
This is my ingress controller:
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  labels:
    k8s-app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    k8s-app: default-http-backend
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: default-http-backend
spec:
  replicas: 1
  selector:
    k8s-app: default-http-backend
  template:
    metadata:
      labels:
        k8s-app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: gcr.io/google_containers/defaultbackend:1.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-controller
  labels:
    k8s-app: nginx-ingress-lb
spec:
  replicas: 1
  selector:
    k8s-app: nginx-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-lb
        name: nginx-ingress-lb
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
        name: nginx-ingress-lb
        imagePullPolicy: Always
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        # use downward API
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        # we expose 18080 to access nginx stats in url /nginx-status
        # this is optional
        - containerPort: 18080
          hostPort: 18080
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
And here is my Laravel application deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: demo-rc
  labels:
    name: demo
    version: v1
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        name: demo
        version: v1
    spec:
      containers:
      - image: gcr.io/projectname/laravelapp:vx
        name: app-pod
        ports:
        - containerPort: 8080
I tried to add the domain entry to the hosts file, but with no luck!
Is there a specific configuration I have to add to the configmap.yaml file for the nginx ingress controller?
In short, to be able to reach your application via an external domain name (singapore.smartlabplatform.com), you need to create an A DNS record for the GCP L4 Load Balancer's external IP address (in other words, the EXTERNAL-IP of your default nginx-ingress-controller Service), here seen as pending:
==> v1/Service
NAME                            TYPE          CLUSTER-IP     EXTERNAL-IP
nginx-ingress-controller        LoadBalancer  10.7.248.226   pending
nginx-ingress-default-backend   ClusterIP     10.7.245.75    none
How to do this is explained on the GKE tutorials page here.
In the current state of your environment you can only reach your application in two ways:
From outside, via the Load Balancer EXTERNAL-IP.
From inside your Kubernetes cluster, using the laravel-kubernetes-demo service DNS name:
$ curl laravel-kubernetes-demo.default.svc.cluster.local
<title>Laravel Kubernetes Demo :: LearnK8s</title>
If you want all that magic, like the automatic creation of DNS records, to happen along with the appearance of host: domain.com in your Ingress resource spec, you should use external-dns (which makes Kubernetes resources discoverable via public DNS servers), and here is the tutorial on how to set it up specifically for GKE.
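Until the DNS record exists, a quick way to verify the chain end to end is to hit the load balancer directly with the expected Host header (the address below is a placeholder):
# wait for the EXTERNAL-IP to be assigned
kubectl get svc nginx-ingress-controller -w
# then test the ingress rule without DNS
curl -v -H "Host: domain.com" http://<EXTERNAL-IP>/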

How to get Kubernetes Ingress to terminate SSL and proxy to service?

I have a CentOS 7 deployment with Kubernetes on bare metal. Everything works great. However, I would like to get an Ingress working. In brief, what I want to do is terminate the SSL within the Ingress and have plain HTTP between the ingress and my service. This is what I did:
1) I hacked Weave to allow hostNetwork.
2) I have an ingress controller set up as per:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    k8s-app: nginx-ingress-lb
    kubernetes.io/cluster-service: "true"
spec:
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-lb
        name: nginx-ingress-lb
    spec:
      hostNetwork: true
      terminationGracePeriodSeconds: 60
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        role: edge-router
      containers:
      - image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.12.0
        name: nginx-ingress-lb
        imagePullPolicy: Always
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        # use downward API
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --enable-ssl-passthrough
        # - --default-ssl-certificate=$(POD_NAMESPACE)/tls-certificate
        volumeMounts:
        - name: tls-dhparam-vol
          mountPath: /etc/nginx-ssl/dhparam
      volumes:
      - name: tls-dhparam-vol
        secret:
          secretName: tls-dhparam
Note the DaemonSet and the nodeSelector, and also hostNetwork: true, so that my Kubernetes edge-router nodes open up 80 and 443 to listen for routing.
So I attempt to go to http://foo.bar.com and, unsurprisingly, nothing: I just get the default backend - 404 page. I need the Ingress rule...
3) So I create an Ingress rule like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hub
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.org/ssl-services: "hub"
spec:
  tls:
  - hosts:
    - foo.bar.com
    secretName: tls-dhparam
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: hub
          servicePort: 8000
So it works great!... for HTTP. When I go to my node at http://foo.bar.com I can access my service (hub) and can log on. However, as one has to log on, it only makes sense to enforce HTTPS...
So my problem is that when I switch my browser over to https://foo.bar.com, I end up with the default backend - 404 page.
Looking at the cert presented in the above, I see that it is one created by Kubernetes:
Kubernetes Ingress Controller Fake Certificate
Self-signed root certificate
Checking my secrets:
$ kubectl -n ingress-nginx get secrets
NAME                                       TYPE                                  DATA   AGE
default-token-kkd2j                        kubernetes.io/service-account-token   3      12m
nginx-ingress-serviceaccount-token-7f2sq   kubernetes.io/service-account-token   3      12m
tls-dhparam                                Opaque                                1      8m
What am I doing wrong?
The issue was that using a pem file didn't seem to work (and there was no noticeable error associated with it).
It worked when I switched over to a TLS cert/key via:
kubectl create secret tls tls-certificate --key my.key --cert my.cer
In your example, it looks like your Ingress doesn't explicitly declare metadata.namespace. If it ends up in the default namespace while the tls-dhparam Secret is in the ingress-nginx namespace, that would be the problem. The TLS secret for an Ingress must be in the same namespace as the Ingress itself.
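A minimal sketch of keeping them together, assuming the hub Service and the Ingress live in the default namespace:
# create a real TLS secret in the namespace where the Ingress is
kubectl -n default create secret tls tls-certificate --key my.key --cert my.cer
# and reference it from the Ingress via spec.tls[0].secretName: tls-certificate
kubectl -n default describe ingress hub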
