I have a container written in Go. It deploys and runs on my Docker Desktop and on the Kubernetes cluster in Docker Desktop.
I have pushed the same image to Artifact Registry, but it fails to deploy on GKE.
So I deployed it to Cloud Run, and it works! Very confused.
My GKE cluster is Autopilot, so I assume there are no resource issues.
I expected to get a running container; however, I got:
PodUnschedulable
Reason: Cannot schedule pods: Insufficient cpu.
Source:
gmail-sender-7944d6d4d4-tsdt9
gmail-sender-7944d6d4d4-pc9xp
gmail-sender-7944d6d4d4-kdlds
PodUnschedulable: Cannot schedule pods: Insufficient memory.
My deployment file is as follows
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gmail-sender
  labels:
    app: gmail-sender
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gmail-sender
  template:
    metadata:
      labels:
        app: gmail-sender
    spec:
      containers:
      - name: gmail-sender
        image: europe-west2-docker.pkg.dev/ea-website-359514/gmail-sender/gmail-sender:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8099
---
apiVersion: v1
kind: Service
metadata:
  name: gmail-sender-cluster-ip
  labels:
    app: gmail-sender
spec:
  ports:
  - port: 8099
    protocol: TCP
Looking at the error, it is clear that the node doesn't have sufficient memory and CPU to schedule and run the workload. Check the node configuration and ensure that enough resources are available on the node to host the workload.
Related
We have launched a .NET microservice in a container and published it on an EKS cluster.
It's working fine over HTTP.
We followed the link below to deploy the .NET microservice as a container:
https://dotnet.microsoft.com/en-us/learn/aspnet/microservice-tutorial/docker-file
We used the following deploy.yaml:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mymicroservice
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mymicroservice
    spec:
      containers:
      - name: mymicroservice
        image: [YOUR DOCKER ID]/mymicroservice:latest
        ports:
        - containerPort: 80
        env:
        - name: ASPNETCORE_URLS
          value: http://*:80
  selector:
    matchLabels:
      app: mymicroservice
---
apiVersion: v1
kind: Service
metadata:
  name: mymicroservice
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: mymicroservice
This exposed our microservice behind a classic load balancer, and it is working fine over HTTP, but we are facing challenges with HTTPS. How can this be achieved? If we need to use the NGINX Ingress Controller, how should its YAML be tuned to fit our deployment.yaml?
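For reference, a minimal sketch of an NGINX Ingress that terminates TLS in front of the mymicroservice Service could look like the following. It assumes the NGINX Ingress Controller is already installed in the cluster, DNS points a host name at it, and a certificate/key pair is stored in a TLS Secret named mymicroservice-tls (the host and Secret names are placeholders, not from the original setup):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mymicroservice-ingress
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - mymicroservice.example.com       # placeholder host
    secretName: mymicroservice-tls     # placeholder TLS Secret holding tls.crt / tls.key
  rules:
  - host: mymicroservice.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: mymicroservice       # the Service from deploy.yaml above
            port:
              number: 80
With an Ingress controller in front, the mymicroservice Service no longer has to be of type LoadBalancer; ClusterIP is usually sufficient.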
I am trying to deploy a Spring Boot application running on port 8080. My goal is to serve it over HTTPS on a custom subdomain with Google-managed certificates.
Here are my YAMLs.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  namespace: my-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-deployment
      namespace: my-namespace
  template:
    metadata:
      labels:
        app: my-deployment
        namespace: my-namespace
    spec:
      containers:
      - name: app
        image: gcr.io/PROJECT_ID/IMAGE:TAG
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "256Mi"
            ephemeral-storage: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            ephemeral-storage: "512Mi"
            cpu: "250m"
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-namespace
  annotations:
    cloud.google.com/backend-config: '{"default": "my-http-health-check"}'
spec:
  selector:
    app: my-deployment
    namespace: my-namespace
  type: NodePort
  ports:
  - port: 80
    name: http
    targetPort: http
    protocol: TCP
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: my-name-space
  annotations:
    kubernetes.io/ingress.global-static-ip-name: my-ip
    networking.gke.io/managed-certificates: my-cert
    kubernetes.io/ingress.class: "gce"
  labels:
    app: my-ingress
spec:
  rules:
  - host: my-domain.com
    http:
      paths:
      - pathType: ImplementationSpecific
        backend:
          service:
            name: my-service
            port:
              name: http
I followed various documentation; most of it helped me get HTTP working, but I couldn't make HTTPS work, and it ends with the error ERR_SSL_VERSION_OR_CIPHER_MISMATCH. It looks like there is an issue with the global forwarding rule; Ports shows 443-443. What is the correct way to terminate HTTPS traffic at the load balancer and route it to the backend app over HTTP?
From the information provided, I can see that the ManagedCertificate object is missing; you need to create a YAML file with the following structure:
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: my-cert
spec:
  domains:
  - <your-domain-name1>
  - <your-domain-name2>
And then apply it with the command: kubectl apply -f file-name.yaml
Provisioning of the Google-managed certificate can take up to 60 minutes; you can check the status of the certificate with the command: kubectl describe managedcertificate my-cert, and wait for the status to show "Active".
A few prerequisites you need to be aware of, though:
You must own the domain name, and it must be no longer than 63 characters. You can use Google Domains or another registrar.
The cluster must have the HttpLoadBalancing add-on enabled.
Your "kubernetes.io/ingress.class" must be "gce".
You must apply the Ingress and ManagedCertificate resources in the same project and namespace.
Create a reserved (static) external IP address. Reserving a static IP address guarantees that it remains yours, even if you delete the Ingress. If you do not reserve an IP address, it might change, requiring you to reconfigure your domain's DNS records.
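For example, a global static IP matching the my-ip name referenced by the kubernetes.io/ingress.global-static-ip-name annotation in the ingress.yaml above can be reserved with the command: gcloud compute addresses create my-ip --global (use whatever name your annotation actually references).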
Finally, you can take a look at Google's complete guide on Creating an Ingress with a Google-managed certificate.
I am new to Kubernetes.
I have implemented a webserver inside a pod and set up a NodePort service for that pod.
I want to send a POST request with a custom message (in JSON) to the pod after it has been created and is ready to use. I want to use the Go client library for that. Could you please let me know how I can do that?
Which part of the library would help here?
Thanks.
Say the Go server runs locally and you normally use http://localhost:3000 to access it. The pod then has a containerPort of 3000.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-web-deployment
  labels:
    app: GoWeb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: GoWeb
  template:
    metadata:
      labels:
        app: GoWeb
    spec:
      containers:
      - name: go-web
        image: me/go-web:1.0.1
        ports:
        - containerPort: 3000
The Service is then an abstraction of that Pod, describing how to access one or many Pods running that application.
The nodePort of the Service is 31024.
apiVersion: v1
kind: Service
metadata:
  name: go-web-service
spec:
  type: NodePort
  selector:
    app: GoWeb
  ports:
  - port: 3000
    nodePort: 31024
The application is published on http://node-ip:node-port for the public to consume. Kubernetes manages the mappings between the node and the container in the background.
| User | -> | Node:nodePort | -> | Pod:containerPort |
The Kubernetes-internal Service and Pod IPs are usually not reachable from the outside world (unless you specifically set a cluster up that way), whereas the nodes themselves often carry an IP address that is routable/contactable.
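Once the Service is in place, the JSON POST from the question can be sent to that published address from outside the cluster, for example: curl -X POST -H "Content-Type: application/json" -d '{"message":"hello"}' http://<node-ip>:31024/<endpoint> (the payload and endpoint path are placeholders for whatever the webserver actually expects).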
I used NFS to mount ReadWriteMany storage on a deployment on Google Kubernetes Engine, as described in the following link:
https://medium.com/platformer-blog/nfs-persistent-volumes-with-kubernetes-a-case-study-ce1ed6e2c266
However, my particular use case (an Elasticsearch production cluster, for snapshots) requires mounting the ReadWriteMany volume on a StatefulSet.
When using the NFS volume created previously with a StatefulSet, the volumes are not provisioned for the different replicas of the StatefulSet.
Is there any way to overcome this or any other approach I can use?
The guide makes a small mistake depending on how you follow it. The [ClusterIP] defined in the persistent volume should be "nfs-server.default..." instead of "nfs-service.default...". "nfs-server" is what is used in the service definition.
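As a sketch of where that name appears (taken loosely from the tutorial's PersistentVolume definition; the size, path, and DNS suffix are placeholders/assumptions, not values from your setup):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 10Gi          # placeholder size
  accessModes:
    - ReadWriteMany
  nfs:
    # "nfs-server" must match the name of the NFS Service, not "nfs-service"
    server: nfs-server.default.svc.cluster.local   # assumes the default cluster DNS domain
    path: "/"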
Below is a very minimal setup I used for a StatefulSet. I deployed the first three files from the tutorial to create the PV and PVC, then used the YAML below in place of the bonus busybox YAML the author included. This deployed successfully. Let me know if you have trouble.
apiVersion: v1
kind: Service
metadata:
  name: stateful-service
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: thestate
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: thestate
  labels:
    app: thestate
spec:
  serviceName: stateful-service
  replicas: 3
  selector:
    matchLabels:
      app: thestate
  template:
    metadata:
      labels:
        app: thestate
    spec:
      containers:
      - name: nginx
        image: nginx:1.8
        volumeMounts:
        - name: my-pvc-nfs
          mountPath: /mnt
        ports:
        - containerPort: 80
          name: web
      volumes:
      - name: my-pvc-nfs
        persistentVolumeClaim:
          claimName: nfs
I created a Service and used NodePort, etc., but couldn't access the service.
I created a web-service.yaml file with the following content and used kubectl to create the Service:
apiVersion: v1
kind: Service
metadata:
  name: web-service
  labels:
    app: web-service
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: webserver
and the webserver.yaml file with the following Deployment details
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 3
  template:
    metadata:
      labels:
        run: webserver
    spec:
      containers:
      - name: webserver
        image: nginx:alpine
        ports:
        - containerPort: 80
In your Deployment, the label is run=webserver, but in your Service, the label is app=webserver. The Service uses app=webserver as its selector, through which it is supposed to select the three pods that have the label "app" set to "webserver". In this case none of the pods carries an "app" label, so the Deployment is not successfully exposed by the Service. The label names and values in the Deployment and Service must match.
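As a minimal sketch of one way to line them up (here the Service selector is changed to the label the pods actually carry; alternatively, the pod template label could be changed to app: webserver instead):
apiVersion: v1
kind: Service
metadata:
  name: web-service
  labels:
    app: web-service
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: webserver   # now matches the pod template label in webserver.yaml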