It seems that my Spring application is running its graceful shutdown twice during a k8s deployment. Has anyone had a similar issue?
{"level":"INFO","message":"Commencing graceful shutdown. Waiting for active requests to complete","logger":"org.springframework.boot.web.embedded.tomcat.GracefulShutdown","thread":"SpringApplicationShutdownHook"}
{"level":"INFO","message":"Graceful shutdown complete","logger":"org.springframework.boot.web.embedded.tomcat.GracefulShutdown","thread":"tomcat-shutdown"}
{"level":"INFO","message":"Commencing graceful shutdown. Waiting for active requests to complete","logger":"org.springframework.boot.web.embedded.tomcat.GracefulShutdown","thread":"SpringApplicationShutdownHook"}
{"level":"INFO","message":"Graceful shutdown complete","logger":"org.springframework.boot.web.embedded.tomcat.GracefulShutdown","thread":"tomcat-shutdown"}
{"level":"INFO","message":"Closing JPA EntityManagerFactory for persistence unit 'default'","traceId":"","spanId":"","requestId":"","user":"","logger":"org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean","thread":"SpringApplicationShutdownHook"}
My application properties:
spring:
  lifecycle:
    timeout-per-shutdown-phase: 20s
server:
  shutdown: graceful
My simplified deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  template:
    spec:
      containers:
        - name: my-container
          image: my-api
          ports:
            - name: container-port
              containerPort: 8080
            - name: metrics
              containerPort: 8081
          lifecycle:
            preStop:
              exec:
                command: ["sh", "-c", "sleep 10"]
        - name: cloud-sql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.30.0
          command:
            - "/cloud_sql_proxy"
            - "-ip_address_types=PRIVATE"
            - "-structured_logs"
            - "-verbose=false"
          lifecycle:
            preStop:
              exec:
                command: ["sh", "-c", "sleep 10"]
Related
I have a shell script in my application that I need to execute once the container has started, and it must keep running in the background. I have looked at using lifecycle hooks, but it does not work for me:
ports:
  - name: php-port
    containerPort: 9000
lifecycle:
  postStart:
    exec:
      command: ["/bin/sh", "sh /root/script.sh"]
I need an artisan command to keep running in the background once the container has started.
When the lifecycle hooks (e.g. postStart) do not work for you, you can add another container to your pod that runs in parallel to your main container (sidecar pattern):
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
    - name: main
      image: some/image
      ...
    - name: sidecar
      image: another/container
If your 2nd container should only start after your main container has started successfully, you need some kind of notification. This could, for example, be the main container creating a file on a shared volume (e.g. an emptyDir) that the 2nd container waits for before it starts its main process. The docs have an example of a shared volume for two containers in the same pod. This obviously requires adding some additional logic to the main container.
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  volumes:
    - name: shared-data
      emptyDir: {}
  containers:
    - name: main
      image: some/image
      volumeMounts:
        - name: shared-data
          mountPath: /some/path
    - name: sidecar
      image: another/image
      volumeMounts:
        - name: shared-data
          mountPath: /trigger
      command: ["/bin/bash"]
      args: ["-c", "while [ ! -f /trigger/triggerfile ]; do sleep 1; done; ./your/2nd-app"]
You can try using something like supervisor
http://supervisord.org/
We use that to start the main process and a monitoring agent in the background so we get metrics out of it. supervisor would also ensure those processes stay up if they crash or terminate.
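As an illustration only (the program names and the php-fpm process are assumptions, not from the question; /root/script.sh is the script mentioned above), a supervisord.conf for "main process plus background script" could look roughly like this:

[supervisord]
nodaemon=true                      ; keep supervisord in the foreground as PID 1

[program:main]
command=php-fpm -F                 ; hypothetical main process, also kept in the foreground
autorestart=true

[program:background-script]
command=/bin/sh /root/script.sh    ; the long-running script that must stay up
autorestart=true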
My client has a deployment requiring the following three items:
A Laravel app running on PHP Artisan server, port 8080.
A websockets server running via the LaravelWebSockets library (built in to Laravel application), port 6001.
A mysql database, port 3306.
The deployment is currently running with items 1 and 3. I would like to add item 2 (the websockets server).
I'd like to use a container for each of the above, in the same pod. It doesn't make any sense to me to create an entirely new deployment just to host the websockets server.
Since the proposed websockets server runs off the same Dockerized application that the Artisan server does, I am using the same image to build a matching container, using a different port and a different CMD.
Is this a good way to approach this, or is there a better way? Here is my Kubernetes file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zebra-master
  labels:
    app: zebra-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zebra-master
  template:
    metadata:
      labels:
        app: zebra-master
    spec:
      containers:
        - name: zebra-master
          image: registry/zebra-master:build-BUILDNUMBER
          ports:
            - containerPort: 8080
          command: ["php artisan serve --host=0.0.0.0 --port=8080 -vvv"]
        - name: websockets-master
          image: registry/zebra-master:build-BUILDNUMBER
          ports:
            - containerPort: 6001
          command: ["php artisan websockets:serve"]
        - name: mysql
          image: mysql/mysql-server:5.7
          ports:
            - containerPort: 3306
          volumeMounts:
            ...
      restartPolicy: Always
      volumes:
        ...
---
apiVersion: v1
kind: ConfigMap
...
---
apiVersion: v1
kind: Service
metadata:
  name: zebra-master
  annotations:
    field.cattle.io/targetWorkloadIds: '["deployment:default:zebra-master"]'
spec:
  ports:
    - name: "8080"
      port: 8080
      targetPort: 8080
  selector:
    workload: "zebra-master"
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: zebra-master
spec:
  rules:
    - host: zebra.com
      http:
        paths:
          - backend:
              serviceName: zebra-master
              servicePort: 8080
            path: /
If I understand correctly, you want to run the websocket server on zebra-master and run the PHP project at the same time, and I see that you are using "command".
I found this, maybe it will help you: How to set multiple commands in one yaml file with Kubernetes?
And for the ports you can add these lines:
ports:
  - containerPort: 8080
    name: http
  - containerPort: 6001
    name: websocket-port
In my case I would rebuild the image's entrypoint with the websocket command.
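One detail worth keeping in mind: with the exec form of command, each list element is passed as a single argument, so command: ["php artisan serve ..."] looks for a binary literally named "php artisan serve ...". A sketch of the two usual ways to write it, reusing the container names from the question:

containers:
  - name: zebra-master
    image: registry/zebra-master:build-BUILDNUMBER
    # either split the command into one list item per argument...
    command: ["php", "artisan", "serve", "--host=0.0.0.0", "--port=8080", "-vvv"]
  - name: websockets-master
    image: registry/zebra-master:build-BUILDNUMBER
    # ...or let a shell parse the full command line
    command: ["sh", "-c", "php artisan websockets:serve"]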
I have a pair of Kubernetes pods, one for nginx and one for Python Flask + uWSGI. I have tested my setup locally with docker-compose and it worked fine; however, after deploying to Kubernetes there seems to be no communication between the two. The end result is that I get a 502 Bad Gateway error when trying to reach my location.
So my question is not really about what is wrong with my setup, but rather what tools can I use to debug this scenario. Is there a test-client for uwsgi? Can I use ncat? I don't seem to get any useful log output from nginx, and I don't know if uwsgi even has a log.
How can I debug this?
For reference, here is my nginx location:
location / {
    # Trick to avoid nginx aborting at startup (set server in variable)
    set $upstream_server ${APP_SERVER};
    include uwsgi_params;
    uwsgi_pass $upstream_server;
    uwsgi_read_timeout 300;
    uwsgi_intercept_errors on;
}
Here is my wsgi.ini:
[uwsgi]
module = my_app.app
callable = app
master = true
processes = 5
socket = 0.0.0.0:5000
die-on-term = true
uid = www-data
gid = www-data
Here is the kubernetes deployment.yaml for nginx:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: nginx
  name: nginx
spec:
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      service: nginx
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        service: nginx
    spec:
      imagePullSecrets:
        - name: docker-reg
      containers:
        - name: nginx
          image: <custom image url>
          imagePullPolicy: Always
          env:
            - name: APP_SERVER
              valueFrom:
                secretKeyRef:
                  name: my-environment-config
                  key: APP_SERVER
            - name: FK_SERVER_NAME
              valueFrom:
                secretKeyRef:
                  name: my-environment-config
                  key: SERVER_NAME
          ports:
            - containerPort: 80
            - containerPort: 10443
            - containerPort: 10090
          resources:
            requests:
              cpu: 1m
              memory: 200Mi
          volumeMounts:
            - mountPath: /etc/letsencrypt
              name: my-storage
              subPath: nginx
            - mountPath: /dev/shm
              name: dshm
      restartPolicy: Always
      volumes:
        - name: my-storage
          persistentVolumeClaim:
            claimName: my-storage-claim-nginx
        - name: dshm
          emptyDir:
            medium: Memory
Here is the kubernetes service.yaml for nginx:
apiVersion: v1
kind: Service
metadata:
  labels:
    service: nginx
  name: nginx
spec:
  type: LoadBalancer
  ports:
    - name: "nginx-port-80"
      port: 80
      targetPort: 80
      protocol: TCP
    - name: "nginx-port-443"
      port: 443
      targetPort: 10443
      protocol: TCP
    - name: "nginx-port-10090"
      port: 10090
      targetPort: 10090
      protocol: TCP
  selector:
    service: nginx
Here is the kubernetes deployment.yaml for python flask:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: my-app
  name: my-app
spec:
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      service: my-app
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        service: my-app
    spec:
      imagePullSecrets:
        - name: docker-reg
      containers:
        - name: my-app
          image: <custom image url>
          imagePullPolicy: Always
          ports:
            - containerPort: 5000
          resources:
            requests:
              cpu: 1m
              memory: 100Mi
          volumeMounts:
            - name: merchbot-storage
              mountPath: /app/data
              subPath: my-app
            - name: dshm
              mountPath: /dev/shm
            - name: local-config
              mountPath: /app/secrets/local_config.json
              subPath: merchbot-local-config-test.json
      restartPolicy: Always
      volumes:
        - name: merchbot-storage
          persistentVolumeClaim:
            claimName: my-storage-claim-app
        - name: dshm
          emptyDir:
            medium: Memory
        - name: local-config
          secret:
            secretName: my-app-local-config
Here is the kubernetes service.yaml for the Python Flask app:
apiVersion: v1
kind: Service
metadata:
  labels:
    service: my-app
  name: my-app
spec:
  ports:
    - name: "my-app-port-5000"
      port: 5000
      targetPort: 5000
  selector:
    service: my-app
Debugging in Kubernetes is not very different from debugging outside of it; there are just some concepts that need to be overlaid for the Kubernetes world.
A Pod in Kubernetes is what you would conceptually see as a host in the VM world. Every container running in a Pod will see each other's services on localhost. From there, a Pod talking to anything else will have a network connection involved (even if the endpoint is node-local). So start testing with services on localhost and work your way out through pod IP, service IP, and service name.
Some complexity comes from having the debug tools available in the containers. Generally containers are built slim and don't have everything available, so you either need to install tools while a container is running (if you can) or build a special "debug" container you can deploy on demand in the same environment. You can always fall back to testing from the cluster nodes, which also have access.
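For the on-demand option, a throwaway pod with common network tools can be started and removed again when you exit the shell (the netshoot image is just one commonly used choice, not something from this setup):

kubectl run debug --rm -it --restart=Never --image=nicolaka/netshoot -- sh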
Where you have Python available you can test with uwsgi_curl:
pip install uwsgi-tools
uwsgi_curl hostname:port /path
Otherwise nc/curl will suffice, to a point.
Pod to localhost
The first step is to make sure the container itself is responding. In this case you are likely to have python/pip available to use uwsgi_curl:
kubectl exec -ti my-app-XXXX-XXXX sh
nc -v localhost 5000
uwsgi_curl localhost:5000 /path
Pod to Pod/Service
Next, include the Kubernetes networking. Start with IPs and finish with names.
You are less likely to have Python here, or even nc, but I think testing the environment variables is important here:
kubectl exec -ti nginx-XXXX-XXXX sh
nc -v my-app-pod-IP 5000
nc -v my-app-service-IP 5000
nc -v my-app-service-name 5000
echo $APP_SERVER
echo $FK_SERVER_NAME
nc -v $APP_SERVER 5000
# or
uwsgi_curl $APP_SERVER:5000 /path
Debug Pod to Pod/Service
If you do need to use a debug pod, try and mimic the pod you are testing as much as possible. It's great to have a generic debug pod/deployment to quickly test anything, but if that doesn't reveal the issue you may need to customise the deployment to mimic the pod you are testing more closely.
In this case the environment variables play a part in the connection setup, so that should be emulated for a debug pod.
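A minimal sketch of such a debug pod, reusing the secret-backed environment variable from the nginx deployment above (the image is an assumption; any image with nc/curl installed would do):

apiVersion: v1
kind: Pod
metadata:
  name: debug
spec:
  containers:
    - name: debug
      image: nicolaka/netshoot   # assumption: any image with nc/curl
      command: ["sleep", "3600"]
      env:
        - name: APP_SERVER
          valueFrom:
            secretKeyRef:
              name: my-environment-config
              key: APP_SERVER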
Node to Pod/Service
Pods/Services will be available from the cluster nodes (if you are not using restrictive network policies) so usually the quick test is to check Pods/Services are working from there:
nc -v <pod_ip> <container_port>
nc -v <service_ip> <service_port>
nc -v <service_dns> <service_port>
In this case:
nc -v <my_app_pod_ip> 5000
nc -v <my_app_service_ip> 5000
nc -v my-app.<namespace>.svc.cluster.local 5000
I am currently working with Kubernetes and am trying to set up a development process that lets developers access services within a local Kubernetes cluster. I would like to keep it simple; for now I have tried kubectl port-forward kafka 10000:9092, but this didn't seem to expose the pod on localhost:10000.
I've tried converting the kafka service to a NodePort, still with no luck. The only way I could access it was by packaging my application as a Docker image and running it in a container
- meaning that running the exe directly would not connect to it, but executing it through Docker would make it work.
I've tried kubectl proxy, which doesn't work either - I am not able to ping the clusterIP.
I have not tried an ingress or a load balancer, as I find them a bit too elaborate considering that this is only for development purposes and not something that needs to be production "secure".
How do I easily expose the kafka service so that a console application on my laptop, which runs the Kubernetes cluster locally, can access it?
apiVersion: v1
kind: Service
metadata:
  name: kafka-headless
spec:
  clusterIP: None
  ports:
    - name: broker
      port: 9092
      protocol: TCP
      targetPort: 9092
  selector:
    app: kafka
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: kafka
spec:
  ports:
    - name: broker
      port: 9092
      protocol: TCP
      targetPort: 9092
  selector:
    app: kafka
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: kafka
  name: kafka
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      app: kafka
  serviceName: kafka-headless
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
        - command:
            - sh
            - -exc
            - |
              unset KAFKA_PORT && \
              export KAFKA_BROKER_ID=${HOSTNAME##*-} && \
              export KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://${POD_IP}:9092 && \
              exec /etc/confluent/docker/run
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.podIP
            - name: KAFKA_HEAP_OPTS
              value: -Xmx1G -Xms1G
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: leader-zookeeper:2181
            - name: KAFKA_LOG_DIRS
              value: /opt/kafka/data/logs
            - name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
              value: "1"
          image: confluentinc/cp-kafka:latest
          imagePullPolicy: IfNotPresent
          livenessProbe:
            exec:
              command:
                - sh
                - -ec
                - /usr/bin/jps | /bin/grep -q SupportedKafka
            failureThreshold: 3
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
          name: kafka-broker
          readinessProbe:
            failureThreshold: 3
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            tcpSocket:
              port: kafka
            timeoutSeconds: 5
          ports:
            - containerPort: 9092
              name: kafka
              protocol: TCP
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /opt/kafka/data
              name: datadir
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 60
  updateStrategy:
    type: OnDelete
  volumeClaimTemplates:
    - metadata:
        name: datadir
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  labels:
    app: kafka-pdb
  name: kafka-pdb
spec:
  maxUnavailable: 0
  selector:
    matchLabels:
      app: kafka
---
To port-forward to a service, you need to put svc/ in front of the name. So your command would be either kubectl port-forward svc/kafka 10000:9092 or kubectl port-forward kafka-0 10000:9092.
On Windows, make sure the Windows firewall is not blocking kubectl.
Reference:
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/#forward-a-local-port-to-a-port-on-the-pod
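For example, assuming the manifests above are deployed in the current namespace:

kubectl port-forward svc/kafka 10000:9092
# in a second terminal, check that the tunnel answers locally
nc -v localhost 10000

Note that a successful TCP check only proves the tunnel works; a Kafka client can still fail afterwards because the broker advertises PLAINTEXT://${POD_IP}:9092 (see the StatefulSet above), and that pod IP is not reachable from the laptop.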
You can also use Telepresence to debug an existing service on the cluster by swapping it with a local development version.
Install telepresence and use telepresence --swap-deployment $DEPLOYMENT_NAME
Reference:
https://kubernetes.io/docs/tasks/debug-application-cluster/local-debugging/#developing-or-debugging-an-existing-service
https://www.telepresence.io/reference/install
If I understand you correctly, I have some additional options for you to check:
This answer uses the idea of externalTrafficPolicy: Local alongside other possible solutions.
I see from the comments that you are using Docker Desktop for Windows. You can try to use a type: LoadBalancer service instead of ClusterIP or NodePort. I know it may sound weird, but I have seen a few examples like this one showing that it actually works.
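A minimal sketch of what that could look like for the kafka app above (the service name kafka-external is made up; on Docker Desktop the load balancer is typically published on localhost):

apiVersion: v1
kind: Service
metadata:
  name: kafka-external
spec:
  type: LoadBalancer
  ports:
    - name: broker
      port: 9092
      protocol: TCP
      targetPort: 9092
  selector:
    app: kafka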
I am posting this as a community answer because the proposed solutions were not originally my ideas.
Please let me know if that helped.
I am new to Jaeger and I am having trouble finding the services list in the Jaeger UI.
Below are the .yaml configurations I prepared to run Jaeger with my Spring Boot app on Kubernetes, using minikube locally.
kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-kubernetes/master/production-elasticsearch/elasticsearch.yml --namespace=kube-system
kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-kubernetes/master/jaeger-production-template.yml --namespace=kube-system
I created a deployment that runs my Spring Boot app and the Jaeger agent in the same pod:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tax-app-deployment
spec:
  template:
    metadata:
      labels:
        app: tax-app
        version: latest
    spec:
      containers:
        - image: tax-app
          name: tax-app
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
        - image: jaegertracing/jaeger-agent
          imagePullPolicy: IfNotPresent
          name: jaeger-agent
          ports:
            - containerPort: 5775
              protocol: UDP
            - containerPort: 5778
            - containerPort: 6831
              protocol: UDP
            - containerPort: 6832
              protocol: UDP
          command:
            - "/go/bin/agent-linux"
            - "--collector.host-port=jaeger-collector.jaeger-infra.svc:14267"
And the Spring Boot app's service yaml:
apiVersion: v1
kind: Service
metadata:
  name: tax
  labels:
    app: tax-app
    jaeger-infra: tax-service
spec:
  ports:
    - name: tax-port
      port: 8080
      protocol: TCP
      targetPort: 8080
  clusterIP: None
  selector:
    jaeger-infra: jaeger-tax
I am getting
No service dependencies found
Service graph data must be generated for Jaeger separately. Currently that is possible via a Spark job: https://github.com/jaegertracing/spark-dependencies
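A rough sketch of running that job inside the cluster, assuming the production-elasticsearch setup linked in the question; the image comes from that repository, but the exact environment variable names and the Elasticsearch URL should be checked against its README:

apiVersion: batch/v1
kind: Job
metadata:
  name: jaeger-spark-dependencies
  namespace: kube-system
spec:
  template:
    spec:
      containers:
        - name: spark-dependencies
          image: jaegertracing/spark-dependencies
          env:
            - name: STORAGE
              value: elasticsearch
            - name: ES_NODES
              value: http://elasticsearch:9200   # assumption: matches the elasticsearch service name
      restartPolicy: Never

The job reads the spans already stored in Elasticsearch and writes the dependency links the UI needs, so it has to be re-run (e.g. as a CronJob) to keep the service graph up to date.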