I have a pair of Kubernetes pods, one for nginx and one for Python Flask + uWSGI. I have tested my setup locally in docker-compose and it worked fine, but after deploying to Kubernetes it seems there is no communication between the two. The end result is that I get a 502 Bad Gateway error when trying to reach my nginx location.
So my question is not really about what is wrong with my setup, but rather what tools can I use to debug this scenario. Is there a test-client for uwsgi? Can I use ncat? I don't seem to get any useful log output from nginx, and I don't know if uwsgi even has a log.
How can I debug this?
For reference, here is my nginx location:
location / {
# Trick to avoid nginx aborting at startup (set server in variable)
set $upstream_server ${APP_SERVER};
include uwsgi_params;
uwsgi_pass $upstream_server;
uwsgi_read_timeout 300;
uwsgi_intercept_errors on;
}
Here is my wsgi.ini:
[uwsgi]
module = my_app.app
callable = app
master = true
processes = 5
socket = 0.0.0.0:5000
die-on-term = true
uid = www-data
gid = www-data
Here is the kubernetes deployment.yaml for nginx:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
service: nginx
name: nginx
spec:
replicas: 1
revisionHistoryLimit: 2
selector:
matchLabels:
service: nginx
strategy:
type: Recreate
template:
metadata:
labels:
service: nginx
spec:
imagePullSecrets:
- name: docker-reg
containers:
- name: nginx
image: <custom image url>
imagePullPolicy: Always
env:
- name: APP_SERVER
valueFrom:
secretKeyRef:
name: my-environment-config
key: APP_SERVER
- name: FK_SERVER_NAME
valueFrom:
secretKeyRef:
name: my-environment-config
key: SERVER_NAME
ports:
- containerPort: 80
- containerPort: 10443
- containerPort: 10090
resources:
requests:
cpu: 1m
memory: 200Mi
volumeMounts:
- mountPath: /etc/letsencrypt
name: my-storage
subPath: nginx
- mountPath: /dev/shm
name: dshm
restartPolicy: Always
volumes:
- name: my-storage
persistentVolumeClaim:
claimName: my-storage-claim-nginx
- name: dshm
emptyDir:
medium: Memory
Here is the kubernetes service.yaml for nginx:
apiVersion: v1
kind: Service
metadata:
labels:
service: nginx
name: nginx
spec:
type: LoadBalancer
ports:
- name: "nginx-port-80"
port: 80
targetPort: 80
protocol: TCP
- name: "nginx-port-443"
port: 443
targetPort: 10443
protocol: TCP
- name: "nginx-port-10090"
port: 10090
targetPort: 10090
protocol: TCP
selector:
service: nginx
Here is the kubernetes deployment.yaml for python flask:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
service: my-app
name: my-app
spec:
replicas: 1
revisionHistoryLimit: 2
selector:
matchLabels:
service: my-app
strategy:
type: Recreate
template:
metadata:
labels:
service: my-app
spec:
imagePullSecrets:
- name: docker-reg
containers:
- name: my-app
image: <custom image url>
imagePullPolicy: Always
ports:
- containerPort: 5000
resources:
requests:
cpu: 1m
memory: 100Mi
volumeMounts:
- name: merchbot-storage
mountPath: /app/data
subPath: my-app
- name: dshm
mountPath: /dev/shm
- name: local-config
mountPath: /app/secrets/local_config.json
subPath: merchbot-local-config-test.json
restartPolicy: Always
volumes:
- name: merchbot-storage
persistentVolumeClaim:
claimName: my-storage-claim-app
- name: dshm
emptyDir:
medium: Memory
- name: local-config
secret:
secretName: my-app-local-config
Here is the kubernetes service.yaml for the Flask app:
apiVersion: v1
kind: Service
metadata:
labels:
service: my-app
name: my-app
spec:
ports:
- name: "my-app-port-5000"
port: 5000
targetPort: 5000
selector:
service: my-app
Debugging in Kubernetes is not very different from debugging outside it; there are just some concepts that need to be overlaid for the Kubernetes world.
A Pod in Kubernetes is what you would conceptually see as a host in the VM world. Every container running in a Pod will see each other's services on localhost. From there, a Pod to anything else will have a network connection involved (even if the endpoint is node local). So start testing with services on localhost and work your way out through pod IP, service IP, service name.
Some complexity comes from having the debug tools available in the containers. Generally containers are built slim and don't have everything available. So you either need to install tools while a container is running (if you can) or build a special "debug" container you can deploy on demand in the same environment. You can always fall back to testing from the cluster nodes which also have access.
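As a rough sketch (busybox and the pod name here are placeholder choices, not something from your manifests), a throwaway debug pod in the cluster can be started with:
kubectl run debug-shell --rm -it --image=busybox --restart=Never -- sh
busybox ships nc and nslookup, which covers most of the basic connectivity tests below.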
Where you have Python available you can test with uwsgi_curl:
pip install uwsgi-tools
uwsgi_curl hostname:port /path
Otherwise nc/curl will suffice, to a point.
Pod to localhost
The first step is to make sure the container itself is responding. In this case you are likely to have python/pip available, so you can use uwsgi_curl:
kubectl exec -ti my-app-XXXX-XXXX sh
nc -v localhost 5000
uwsgi_curl localhost:5000 /path
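It's also worth looking at the application logs at this point; assuming uWSGI is logging to stdout/stderr (the default in a container), kubectl can show them:
kubectl logs my-app-XXXX-XXXX
kubectl logs my-app-XXXX-XXXX --previous   # if the container has restarted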
Pod to Pod/Service
Next, include the Kubernetes networking. Start with IPs and finish with names.
You are less likely to have python here, or even nc, but I think testing the environment variables is important here:
kubectl exec -ti nginx-XXXX-XXXX sh
nc -v my-app-pod-IP 5000
nc -v my-app-service-IP 5000
nc -v my-app-service-name 5000
echo $APP_SERVER
echo $FK_SERVER_NAME
nc -v $APP_SERVER 5000
# or
uwsgi_curl $APP_SERVER:5000 /path
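To find the pod IP and service IP used above, the usual commands are:
kubectl get pods -o wide   # pod IPs in the IP column
kubectl get svc my-app     # service cluster IP and port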
Debug Pod to Pod/Service
If you do need to use a debug pod, try to mimic the pod you are testing as much as possible. It's great to have a generic debug pod/deployment to quickly test anything, but if that doesn't reveal the issue you may need to customise the deployment to mimic the pod you are testing more closely.
In this case the environment variables play a part in the connection setup, so that should be emulated for a debug pod.
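As a minimal sketch (assuming APP_SERVER resolves to the my-app service, which is presumably what your secret holds), a one-off pod with the same variable could look like:
kubectl run debug-env --rm -it --image=busybox --restart=Never --env="APP_SERVER=my-app" -- sh
# inside the pod: echo $APP_SERVER, then nc -v $APP_SERVER 5000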
Node to Pod/Service
Pods/Services will be available from the cluster nodes (if you are not using restrictive network policies) so usually the quick test is to check Pods/Services are working from there:
nc -v <pod_ip> <container_port>
nc -v <service_ip> <service_port>
nc -v <service__dns> <service_port>
In this case:
nc -v <my_app_pod_ip> 5000
nc -v <my_app_service_ip> 5000
nc -v my-app.<namespace>.svc.cluster.local 5000
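If the service name or IP doesn't respond, also confirm the service actually has endpoints (i.e. its selector matches a ready pod):
kubectl get endpoints my-app
# an empty ENDPOINTS column means the selector doesn't match any running/ready pod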
Related
I can't for the life of me get this to connect.
It is a golang application using Kubernetes.
The Docker container runs just fine, and the pod launches, but the connection times out.
apiVersion: v1
kind: Service
metadata:
name: ark-service
namespace: ark
spec:
type: NodePort
ports:
- port: 80
targetPort: 8080
nodePort: 30008
selector:
app: ark-api
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: ark-backend
namespace: ark
spec:
replicas: 1
selector:
matchLabels:
app: ark-api
template:
metadata:
labels:
app: ark-api
spec:
imagePullSecrets:
- name: regcred
containers:
- name: ark-api-container
image: xxx
imagePullPolicy: Always
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- name: web
containerPort: 8080
I am able to boot the docker container just fine and it runs.
Turns out the container gets terminated and I have no idea why.
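To find out why a container gets terminated, a common first step is to look at the pod's events and the previous container's logs, along these lines:
kubectl describe pod <pod_name> -n ark     # check State, Last State and Events
kubectl logs <pod_name> -n ark --previous  # logs from the terminated container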
You could check whether port 8080 is listening inside the container:
kubectl exec -it <pod_name> -n <namespace> -- netstat -ntpl
If there is no netstat command in the container, you could try to build a base image that includes it.
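Alternatively, on clusters that support ephemeral containers, kubectl debug can attach a throwaway image with the needed tools instead of rebuilding the base image (busybox is just an example here; it bundles a netstat applet):
kubectl debug -it <pod_name> -n <namespace> --image=busybox --target=ark-api-container -- netstat -ntpl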
Check whether port 30008 is listening on the node. Run the following command on the node:
netstat -ntpl | grep 30008
Also, you could try not specifying the nodePort in the service yaml and let Kubernetes choose the nodePort for you. That avoids specifying a port that is already in use on your node.
apiVersion: v1
kind: Service
metadata:
name: ark-service
namespace: ark
spec:
type: ClusterIP
selector:
app: ark-api
ports:
- port: 80
targetPort: 8080
Try using ClusterIP instead of NodePort. If you are using any kind of Ingress, then you have to create rules in your Ingress config so it can expose your service to the outside web via your load balancer.
I deleted the service and used port forwarding and was able to boot everything. I'll have to circle back to the service to try and figure it out.
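For reference, the port-forward equivalent for this deployment would be something along these lines (adjust names and ports as needed):
kubectl port-forward -n ark deployment/ark-backend 8080:8080
# the API is then reachable on http://localhost:8080 while the command runs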
My client has a deployment requiring the following three items:
A Laravel app running on PHP Artisan server, port 8080.
A websockets server running via the LaravelWebSockets library (built in to Laravel application), port 6001.
A mysql database, port 3306.
The deployment is currently running with items 1 and 3. I would like to add item 2 (the websockets server).
I'd like to use a container for each of the above, in the same pod. It doesn't make any sense to me to create an entirely new deployment just to host the websockets server.
Since the proposed websockets server runs off the same Dockerized application that the Artisan server does, I am using the same image to build a matching container, using a different port and a different CMD.
Is this a good way to approach this, or is there a better way? Here is my Kubernetes file:
apiVersion: apps/v1
kind: Deployment
metadata:
name: zebra-master
labels:
app: zebra-master
spec:
replicas: 1
selector:
matchLabels:
app: zebra-master
template:
metadata:
labels:
app: zebra-master
spec:
containers:
- name: zebra-master
image: registry/zebra-master:build-BUILDNUMBER
ports:
- containerPort: 8080
command: ["php artisan serve --host=0.0.0.0 --port=8080 -vvv"]
- name: websockets-master
image: registry/zebra-master:build-BUILDNUMBER
ports:
- containerPort: 6001
command: ["php artisan websockets:serve"]
- name: mysql
image: mysql/mysql-server:5.7
ports:
- containerPort: 3306
volumeMounts:
...
restartPolicy: Always
volumes:
...
---
apiVersion: v1
kind: ConfigMap
...
---
apiVersion: v1
kind: Service
metadata:
name: zebra-master
annotations:
field.cattle.io/targetWorkloadIds: '["deployment:default:zebra-master"]'
spec:
ports:
- name: "8080"
port: 8080
targetPort: 8080
selector:
workload: "zebra-master"
type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: zebra-master
spec:
rules:
- host: zebra.com
http:
paths:
- backend:
serviceName: zebra-master
servicePort: 8080
path: /
If I understand correctly, you want to run the websockets server on zebra-master and run the PHP project at the same time; I saw that you are using "command".
I found this, maybe it will help you: How to set multiple commands in one yaml file with Kubernetes?
And for the ports you can add these lines:
ports:
- containerPort: 8080
name: http
- containerPort: 6001
name: websocket-port
In my case I would rebuild the image's entrypoint with the websockets command.
I am currently working with Kubernetes and trying to set up a development process that lets a developer access services within a local Kubernetes cluster. I would like to keep it simple, and for now I have tried kubectl port-forward kafka 10000:9092, but this didn't seem to expose the pod on localhost:10000.
I've tried converting the kafka service to a NodePort, still with no luck - the only way I could access it was by dockerizing my application and running it in a Docker container
- meaning that running the exe file directly would not connect, but executing it via Docker would work.
I've tried kubectl proxy, which doesn't work either - I am not able to ping the cluster IP.
I have not tried an Ingress or LoadBalancer, as I find them a bit too elaborate, considering that this is only for development purposes and not something that should be production "secure".
How do I easily expose the kafka service so that a console application on my laptop (which runs the Kubernetes cluster locally) can access it?
apiVersion: v1
kind: Service
metadata:
name: kafka-headless
spec:
clusterIP: None
ports:
- name: broker
port: 9092
protocol: TCP
targetPort: 9092
selector:
app: kafka
sessionAffinity: None
type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
name: kafka
spec:
ports:
- name: broker
port: 9092
protocol: TCP
targetPort: 9092
selector:
app: kafka
sessionAffinity: None
type: ClusterIP
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
labels:
app: kafka
name: kafka
spec:
podManagementPolicy: OrderedReady
replicas: 1
revisionHistoryLimit: 1
selector:
matchLabels:
app: kafka
serviceName: kafka-headless
template:
metadata:
labels:
app: kafka
spec:
containers:
- command:
- sh
- -exc
- |
unset KAFKA_PORT && \
export KAFKA_BROKER_ID=${HOSTNAME##*-} && \
export KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://${POD_IP}:9092 && \
exec /etc/confluent/docker/run
env:
- name: POD_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: KAFKA_HEAP_OPTS
value: -Xmx1G -Xms1G
- name: KAFKA_ZOOKEEPER_CONNECT
value: leader-zookeeper:2181
- name: KAFKA_LOG_DIRS
value: /opt/kafka/data/logs
- name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
value: "1"
image: confluentinc/cp-kafka:latest
imagePullPolicy: IfNotPresent
livenessProbe:
exec:
command:
- sh
- -ec
- /usr/bin/jps | /bin/grep -q SupportedKafka
failureThreshold: 3
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
name: kafka-broker
readinessProbe:
failureThreshold: 3
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
tcpSocket:
port: kafka
timeoutSeconds: 5
ports:
- containerPort: 9092
name: kafka
protocol: TCP
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /opt/kafka/data
name: datadir
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 60
updateStrategy:
type: OnDelete
volumeClaimTemplates:
- metadata:
name: datadir
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
labels:
app: kafka-pdb
name: kafka-pdb
spec:
maxUnavailable: 0
selector:
matchLabels:
app: kafka
---
To port-forward to a service, you need to use svc/ in front of the name. So your command would be either kubectl port-forward svc/kafka 10000:9092 or, to target the pod directly, kubectl port-forward kafka-0 10000:9092.
On Windows, make sure the Windows firewall is not blocking kubectl.
Reference:
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/#forward-a-local-port-to-a-port-on-the-pod
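Once the forward is running, a quick sanity check of the local end (10000 being the local port from the command above) is:
nc -v localhost 10000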
You can also use Telepresence to debug an existing service on the cluster by swapping it with a local development version.
Install telepresence and use telepresence --swap-deployment $DEPLOYMENT_NAME
Reference:
https://kubernetes.io/docs/tasks/debug-application-cluster/local-debugging/#developing-or-debugging-an-existing-service
https://www.telepresence.io/reference/install
If I understand you correctly, I have some additional options for you to check:
This answer uses the idea of externalTrafficPolicy: Local alongside other possible solutions.
I see from the comments that you are using Docker Desktop for Windows. You can try to use a type: LoadBalancer service instead of ClusterIP or NodePort. I know it may sound weird, but I have seen a few examples like this one showing that it actually works.
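If you want to try that without editing the manifest, one way (a sketch, assuming the service is named kafka as in your yaml) is to patch the existing service:
kubectl patch svc kafka -p '{"spec": {"type": "LoadBalancer"}}'
kubectl get svc kafka   # on Docker Desktop the EXTERNAL-IP should show up as localhost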
I am posting this as a community answer because the proposed solutions were not originally my ideas.
Please let me know if that helped.
I have a Spring Boot application which is deployed in Kubernetes on a local Windows machine using minikube. I also have Elasticsearch running on my local machine (http://localhost:9200).
I want to call Elasticsearch REST endpoints from this Spring Boot app.
I tried solving this by creating a service without a selector, but I'm not sure what I am missing.
When accessing the Spring Boot app using http://#minikube_ip#:#Node_Port#, I get the error "No route to host".
I tried doing minikube ssh and executing a curl command; from there I also get the same error. Clearly I am missing something here.
application.yaml
elasticsearch:
hosts:
- http://my-es:80
connectTimeout: 10000
connectionRequestTimeout: 10000
socketTimeout: 10000
maxRetryTimeoutMillis: 60000
deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: kube-es-app
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
run: kube-es-app
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
run: kube-es-app
spec:
containers:
- image: elastic-search-app:latest
imagePullPolicy: Never
name: kube-es-app
ports:
- containerPort: 8080
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
name: my-es
spec:
ports:
- protocol: TCP
port: 80
targetPort: 9200
---
kind: Endpoints
apiVersion: v1
metadata:
name: my-es
subsets:
- addresses:
- ip: <MY_LOCAL_MACHINE_IP>
ports:
- port: 9200
Commands I executed
docker build -t elastic-search-app .
kubectl create -f deployment.yaml
kubectl expose deployment/kube-es-app --type="NodePort" --port 8080
Can anyone help please? I am stuck.
If I've got the description right, the Windows machine should have a VirtualBox network adapter connected to the host-only network that the Minikube VM is connected to.
Minikube can access the host machine directly because both are on the same network.
The Minikube VM is in charge of NAT-ting packets from Pods to the outside. What you need is to allow Elasticsearch to listen on the VirtualBox interface (or all interfaces), and to open its port in the Windows firewall. Then Elasticsearch should be available via the IP address of the Windows host on the host-only network.
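A quick way to verify both halves of that (my-es is the service from your yaml; the IP is your Windows host-only address):
kubectl get endpoints my-es   # should list <MY_LOCAL_MACHINE_IP>:9200
minikube ssh
curl http://<MY_LOCAL_MACHINE_IP>:9200   # run inside the minikube VM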
Apart from that, you might create a service (if you need to go by name instead of IP) as discussed here:
Connect to local database from inside minikube cluster,
Minikube:Exposing mysql as a service on localhost.
I have a CentOS 7 deployment with Kubernetes on bare metal. Everything works great. However, I would like to get an Ingress working. In brief, what I want to do is terminate SSL within the Ingress and have plain HTTP between the Ingress and my service. This is what I did:
1) I hacked Weave to allow hostNetwork
2) I have an ingress controller set up as per:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: nginx-ingress-controller
namespace: ingress-nginx
labels:
k8s-app: nginx-ingress-lb
kubernetes.io/cluster-service: "true"
spec:
template:
metadata:
labels:
k8s-app: nginx-ingress-lb
name: nginx-ingress-lb
spec:
hostNetwork: true
terminationGracePeriodSeconds: 60
serviceAccountName: nginx-ingress-serviceaccount
nodeSelector:
role: edge-router
containers:
- image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.12.0
name: nginx-ingress-lb
imagePullPolicy: Always
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 1
# use downward API
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- containerPort: 80
hostPort: 80
- containerPort: 443
hostPort: 443
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend
- --enable-ssl-passthrough
# - --default-ssl-certificate=$(POD_NAMESPACE)/tls-certificate
volumeMounts:
- name: tls-dhparam-vol
mountPath: /etc/nginx-ssl/dhparam
volumes:
- name: tls-dhparam-vol
secret:
secretName: tls-dhparam
Note the DaemonSet and the nodeSelector, and also hostNetwork: true so that my Kubernetes nodes will open up ports 80 and 443 to listen for routing.
So I attempt to go to http://foo.bar.com and, unsurprisingly, nothing: I just get the default backend - 404 page. I need the ingress rule...
3) So I create an Ingress rule like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: hub
annotations:
kubernetes.io/ingress.class: "nginx"
ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.org/ssl-services: "hub"
spec:
tls:
- hosts:
- foo.bar.com
secretName: tls-dhparam
rules:
- host: foo.bar.com
http:
paths:
- path: /
backend:
serviceName: hub
servicePort: 8000
So it works great... for HTTP. When I go to my node at http://foo.bar.com I can access my service (hub) and log on. However, as one has to log on, it only makes sense to enforce HTTPS...
My problem is that when I switch my browser over to https://foo.bar.com, I end up with the default backend - 404 page.
Looking at the cert presented, I see that it is one created by Kubernetes:
Kubernetes Ingress Controller Fake Certificate
Self-signed root certificate
checking my secrets:
$ kubectl -n ingress-nginx get secrets
NAME TYPE DATA AGE
default-token-kkd2j kubernetes.io/service-account-token 3 12m
nginx-ingress-serviceaccount-token-7f2sq kubernetes.io/service-account-token 3 12m
tls-dhparam Opaque 1 8m
What am I doing wrong?
The issue was that using a pem file didn't seem to work (and there was no noticeable error associated with it).
It worked when I switched over to a TLS cert/key via:
kubectl create secret tls tls-certificate --key my.key --cert my.cer
In your example, it looks like your Ingress doesn't explicitly declare metadata.namespace. If it is ending up in the default namespace while the tls-dhparam Secret is in the ingress-nginx namespace that would be the problem. The tls secrets for Ingresses should be in the same namespace as the Ingress.
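A quick way to confirm that (names as in the question) is to check which namespace the Ingress actually landed in and whether the secret exists there:
kubectl get ingress --all-namespaces | grep hub
kubectl get secrets -n <that_namespace>   # the referenced tls secret must be in the same namespace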