Good day!
I bumped into this post:
https://cleverbuilder.com/articles/spring-boot-kubernetes/
and it seems I am able to run my Spring Boot RESTful application, since it shows up in the Minikube dashboard. The problem is that when I run kubectl get pods or kubectl get svc, I don't see my application. The Kubernetes namespace my application uses is test. I'm puzzled about how to access my application. Please shed some light. Thanks!
Run kubectl get pods -n test and kubectl get svc -n test; this should show you the desired output.
By default, Kubernetes starts with three namespaces: default, kube-public, and kube-system. default is the catch-all namespace for objects that don't belong to kube-public or kube-system; it holds the default set of pods, services, and deployments used by the cluster. Since your pod is in a custom namespace, test (which you created), you need to specify that namespace when querying for the deployment or pod.
So kubectl get pods is effectively kubectl get pods -n default, meaning "show pods in the default namespace". Hence kubectl get pods -n test will show all the pods in the test namespace.
See Documentation
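For illustration, a minimal Deployment manifest that lands in the test namespace might look like this (the app name and image below are placeholders, not taken from the question):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-boot-app      # placeholder name
  namespace: test            # without this field, the object goes to "default"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spring-boot-app
  template:
    metadata:
      labels:
        app: spring-boot-app
    spec:
      containers:
      - name: spring-boot-app
        image: example/spring-boot-app:latest   # placeholder image
```

Every kubectl query against this object then needs -n test (or a context whose default namespace is test).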
Related
I'm running kubernetes in azure.
I want to delete a specific deployment, with AZ AKS or kubectl.
The only info I've found is how to delete pods, but this is not what I'm looking for, since pods will regenerate once deleted.
I know I can just go to the UI and delete the deployment, but I want to do it with az aks or kubectl.
I've run
kubectl get all -A
Then I copy the name of the deployment that I want to delete and run:
kubectl delete deployment zr-binanceloggercandlestick-btcusdt-2hour
kubectl delete deployment deployment.apps/zr-binanceloggercandlestick-btcusdt-12hour
but no success; I get these errors:
Error from server (NotFound): deployments.extensions "zr-binanceloggercandlestick-btcusdt-2hour" not found
error: there is no need to specify a resource type as a separate argument when passing arguments in resource/name form (e.g. 'C:\Users\amnesia\.azure-kubectl\kubectl.exe get resource/<resource_name>' instead of 'C:\Users\amnesia\.azure-kubectl\kubectl.exe get resource resource/<resource_name>'
Find out all deployments across all namespaces
kubectl get deploy -A
Then delete a deployment by name from its namespace; both can be found in the output of the command above.
kubectl delete deploy deploymentname -n namespacename
Docs on how to configure kubectl to connect to AKS.
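If you have many deployments, the listing can be turned into the matching delete commands mechanically. A sketch, using a hard-coded sample of kubectl get deploy -A output so it runs without a cluster (namespace is column 1, name is column 2):

```shell
# Stand-in for the output of: kubectl get deploy -A
sample='NAMESPACE   NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
default     zr-binanceloggercandlestick-btcusdt-2hour   1/1     1            1           2d'

# Skip the header row, then print one delete command per deployment.
echo "$sample" | awk 'NR > 1 { printf "kubectl delete deploy %s -n %s\n", $2, $1 }'
```

This prints the commands rather than running them, so you can review before piping the result to sh.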
Use the below command.
kubectl delete deployment deployment-name-here
More about the command here.
I'm unable to deploy a new image to my service. I'm trying to run this command in my CI environment:
$ kubectl set image deployment/dev2 \
dev2=gcr.io/at-dev-253223/dev@sha256:3c6bc55e279549a431a9e2a316a5cddc44108d9d439781855a3a8176177630f0
I get unable to find container named "dev2"
I'll upload my registry and pods and services, not sure why I can't just pass the new image.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
dev2-6fdf8d4fb5-hnrnv 1/1 Running 0 7h19m
$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dev2 LoadBalancer hidden hidden 80:32594/TCP 6h55m
kubernetes ClusterIP hidden <none> 443/TCP 2d3h
To get past the particular problem you're seeing, you need to do:
$ kubectl set image deployment/dev2 \
661d1a428298276f69028b3e8e2fd9a8c1690095=gcr.io/at-dev-253223/dev@sha256:3c6bc55e279549a431a9e2a316a5cddc44108d9d439781855a3a8176177630f0
instead of
$ kubectl set image deployment/dev2 \
dev2=gcr.io/at-dev-253223/dev@sha256:3c6bc55e279549a431a9e2a316a5cddc44108d9d439781855a3a8176177630f0
A Deployment consists of multiple replicas of the same Pod template. A Pod can have many containers, so if you're trying to set the image, you have to specify which container's image you want to set. You only have one container, and surprisingly its name is 661d1a428298276f69028b3e8e2fd9a8c1690095, so that's what has to go in front of the = sign.
That will fix the unable to find container named "dev2" error. I have some doubt that the image you're setting it to is correct. The current image being used is:
gcr.io/at-dev-253223/661...095@sha256:0fd...20a
You're trying to set it to:
gcr.io/at-dev-253223/dev@sha256:3c6...0f0
The general pattern is:
[HOSTNAME]/[PROJECT-ID]/[IMAGE]@[IMAGE_DIGEST]
(see here). This means you're not just taking a new digest of a given image, but changing the image entirely from 661d1a428298276f69028b3e8e2fd9a8c1690095 to dev. You will have to decide for yourself that's actually what you intend to do or not.
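To see exactly which part of the reference you would be changing, you can split it with plain shell parameter expansion (the reference below is the one from the question):

```shell
ref='gcr.io/at-dev-253223/dev@sha256:3c6bc55e279549a431a9e2a316a5cddc44108d9d439781855a3a8176177630f0'

digest="${ref##*@}"   # everything after the @: the sha256 digest
path="${ref%@*}"      # hostname/project-id/image
image="${path##*/}"   # last path segment: the image name

echo "$image"    # dev
echo "${digest%%:*}"   # sha256
```

Comparing the image segment of the current deployment against the one you are setting makes it obvious whether you are only bumping the digest or switching images entirely.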
Describe your pod with kubectl describe pod dev2-6fdf8d4fb5-hnrnv and make sure that the container is indeed named dev2.
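For reference, the container name that kubectl set image keys on lives in the pod template of the Deployment; in a manifest it is the name field under containers. A hypothetical fragment (image value is a placeholder):

```yaml
spec:
  template:
    spec:
      containers:
      - name: dev2                          # kubectl set image matches on this name
        image: gcr.io/example-project/dev   # placeholder image reference
```

Whatever appears in that name field is what must go on the left side of the = in kubectl set image.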
The instructions say to install it, but doesn't suggest which method will work, in what order or with which load balancer. I keep getting useless test certificates installed on my deployments.
It sounds like you are getting the default chart from helm, which is slightly different from the instructions you will find on the kube-lego Github. The helm chart uses Let's Encrypt's staging server by default. Also, if you are not familiar with helm, it can be quite tricky to undo the remnants that the kube-lego helm chart leaves behind, especially as tiller likes to keep the old installation history.
Here's a brief overview of setting up Gitlab Auto DevOps with Google Kubernetes Engine. I should mention that you can skip up to step 13 if you use the Gitlab GKE wizard introduced in 10.1, but it won't really tell you what's happening or how to debug when things go wrong; also, the wizard is a one-way process.
In Gitlab under integrations you'll find a Kubernetes integration button
You can also find this under CI/ CD -> cluster
The API URL is the endpoint ip prefixed by https://
The "CA Certificate" it asks for is service account CA, not the same as the cluster CA
To connect to gitlab you'll need to create a k8s service account and get the CA and token for it. Do so by using gcloud to authenticate kubectl. GCE makes it easy by doing this for you through the "connect" button in k8s engine
https://kubernetes.io/docs/admin/authentication/#service-account-tokens
All commands must be run with a custom namespace, it will not work with default
kubectl create namespace (NS)
kubectl create serviceaccount (NAME) --namespace=(NS)
This will create two tokens in your configs/secrets: a default one and your service account one
kubectl get -o json serviceaccounts (NAME) --namespace=(NS)
kubectl get -o json secret (secrets-name-on-prev-result) --namespace=(NS)
To decode the base64 values, pipe them to base64 -d, for example
echo mybase64stringasdfasdf= | base64 -d
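A concrete, runnable case (the encoded string here is a made-up stand-in for a real token, not one from an actual secret):

```shell
# Decode a base64 value the same way you would for the service account token.
echo 'bXktc2VjcmV0LXRva2Vu' | base64 -d   # prints: my-secret-token
```

The decoded token and CA are what you paste into the Gitlab Kubernetes integration fields.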
To install helm, use the installation script method
https://docs.helm.sh/using_helm/#from-script
Init helm and update its repo
helm init
helm repo update
Install nginx-ingress, an ingress controller backed by nginx. You could use the Google load balancer as well, but it's not as portable.
helm install stable/nginx-ingress
Make a wild card subdomain with an A record pointing to the IP address set up by ingress
In Gitlab, make Auto DevOps use your newly set up wildcard subdomain; if it's "*.x" on "me.com", you should enter "x.me.com" in the Auto DevOps settings.
Now install kube lego
helm install --name lego-main \
--set config.LEGO_EMAIL=CHANGEMENOW@example.com \
--set config.LEGO_URL=https://acme-v01.api.letsencrypt.org/directory \
stable/kube-lego
Helm packages are called charts; installing a chart into the cluster creates a release
If you wish to delete a release, you must purge it, otherwise tiller will keep a history, with:
helm delete --purge my-release-name
You can find the release names and their associated chart in
helm list
Troubleshooting
Order doesn't seem to matter too much. Attaching to a pod can be a useful way of debugging problems, such as a bad email address. The ideal order, however, is probably nginx-ingress, then kube-lego, then gitlab. I did make it work with gitlab first, then nginx-ingress, then kube-lego.
I heard from Sid that they are working to make this easier... let's hope so.
I am using kubernetes helm to deploy apps to my cluster. Everything works fine from my laptop when helm uses the cluster's kube-config file to deploy to the cluster.
I want to use helm from my CI/CD server (which is separate from my cluster) to automatically deploy apps to my cluster. I have created a k8s service account for my CI/CD server to use. But how do I create a kube-config file for the service account so that helm can use it to connect to my cluster from my CI/CD server?
Or is this not the right way to use Helm from a CI/CD server?
Helm works by using the installed kubectl to talk to your cluster. That means that if you can access your cluster via kubectl, you can use helm with that cluster.
Don't forget to make sure you're using the proper context in case you have more than one cluster in your kubeconfig file. You can check that by running kubectl config current-context and comparing it to the cluster details in the kubeconfig.
You can find more details in Helm's docs, check the quick start guide for more information.
Why not just run your CI server inside your Kubernetes cluster? Then you don't have to manage secrets for accessing the cluster. We do that on Jenkins X and it works great; we can run kubectl or helm inside pipelines just fine.
In this case you will want to install kubectl on whichever slave or agent you have identified for use by your CI/CD server, or install kubectl on the fly in your automation, and then make sure you have, or are able to generate, a kubeconfig to use.
To answer the question:
But how do I create a kube-config file for the service account ...
You can set new clusters, credentials, and contexts for use with kubectl in a default or custom kubeconfig file using kubectl config set-cluster, kubectl config set-credentials, and kubectl config set-context. If you have the KUBECONFIG env variable set and pointing to a kubeconfig file, that works; when setting new entries, simply pass --kubeconfig to point to a custom file.
Here's the relevant API documentation for v1.6.
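Put together, a kubeconfig for a service account has roughly this shape (the cluster name, server address, user name, and the CA/token values below are all placeholders you substitute from your own cluster and secret):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: my-cluster                        # placeholder cluster name
  cluster:
    server: https://1.2.3.4               # your API server endpoint
    certificate-authority-data: <base64-ca-from-the-secret>
users:
- name: ci-deployer                       # placeholder user name
  user:
    token: <decoded-token-from-the-secret>
contexts:
- name: ci-deployer@my-cluster
  context:
    cluster: my-cluster
    user: ci-deployer
current-context: ci-deployer@my-cluster
```

Point KUBECONFIG at this file on the CI/CD server and both kubectl and helm will use it.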
We created helmsman which provides you with declarative syntax to manage helm charts in your cluster. It configures kubectl (and therefore helm) for you wherever you run it. It can also be used from a docker container.
Where can I find an official template which describes how to create your .yaml file to setup services/pods in Kubernetes?
You can find the specification for a pod here http://kubernetes.io/docs/api-reference/v1/definitions/#_v1_pod
A good starting point is also the examples https://github.com/kubernetes/kubernetes/tree/master/examples
Additionally, you can create the resource via kubectl and export it to YAML.
e.g. for a pod you can run these commands:
kubectl run nginx --image=nginx
kubectl get pod nginx -o yaml > pod.yaml
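The exported pod.yaml will look roughly like this (trimmed for readability; a live export also contains a status section and many defaulted fields):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  containers:
  - name: nginx
    image: nginx
```

This exported file is a convenient starting template: delete the server-populated fields and edit the rest to describe your own service or pod.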