YAML templates in Kubernetes

Where can I find an official template that describes how to create a .yaml file to set up services/pods in Kubernetes?

You can find the specification for a pod here: http://kubernetes.io/docs/api-reference/v1/definitions/#_v1_pod
The examples are also a good starting point: https://github.com/kubernetes/kubernetes/tree/master/examples
Additionally, you can create the resource via kubectl and export it to YAML.
E.g. for a pod you can run these commands:
kubectl run nginx --image=nginx
kubectl get pods nginx -o yaml > pod.yaml
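For reference, the exported file reduces to a manifest along these lines (a minimal sketch; the status fields and extra metadata that kubectl adds are omitted):
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx          # illustrative label
spec:
  containers:
  - name: nginx
    image: nginx        # image to pull
    ports:
    - containerPort: 80 # port the container listens on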

Related

How to delete a deployment / image in kubernetes

I'm running Kubernetes in Azure.
I want to delete a specific deployment with az aks or kubectl.
The only info I've found is how to delete pods, but that is not what I'm looking for, since pods regenerate once deleted.
I know I can just go to the UI and delete the deployment, but I want to do it with az aks or kubectl.
I've run
kubectl get all -A
Then I copy the name of the deployment that I want to delete and run:
kubectl delete deployment zr-binanceloggercandlestick-btcusdt-2hour
kubectl delete deployment deployment.apps/zr-binanceloggercandlestick-btcusdt-12hour
but with no success; I get these errors:
Error from server (NotFound): deployments.extensions "zr-binanceloggercandlestick-btcusdt-2hour" not found
error: there is no need to specify a resource type as a separate argument when passing arguments in resource/name form (e.g. 'C:\Users\amnesia\.azure-kubectl\kubectl.exe get resource/<resource_name>' instead of 'C:\Users\amnesia\.azure-kubectl\kubectl.exe get resource resource/<resource_name>'
Find all deployments across all namespaces:
kubectl get deploy -A
Then delete the deployment by name from its namespace (both appear in the output of the command above):
kubectl delete deploy deploymentname -n namespacename
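For example, if the deployment turns out to live in a namespace other than default (the namespace below is hypothetical), the NotFound error goes away once you target that namespace:
kubectl get deploy -A
# NAMESPACE   NAME                                        READY   UP-TO-DATE   AVAILABLE
# logging     zr-binanceloggercandlestick-btcusdt-2hour   1/1     1            1
kubectl delete deploy zr-binanceloggercandlestick-btcusdt-2hour -n logging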
Docs on how to configure kubectl to connect to AKS.
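Getting kubectl pointed at the AKS cluster boils down to something like this (resource group and cluster names are placeholders):
az aks get-credentials --resource-group <resource-group> --name <cluster-name>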
Use the command below.
kubectl delete deployment deployment-name-here
More about the command here.

Helm Install fails with Error: could not find tiller from within Azure Cloud Shell

Helm install called from the Cloud Shell worked last week, but now, regardless of using bash or PowerShell, it returns
Error: could not find tiller
Last week I was able to create an ingress controller by following the Microsoft article Create an HTTPS ingress controller on Azure Kubernetes Service (AKS).
Now when I get to the helm install step I get the error indicated in the title. To recreate this issue:
Do clouddrive unmount from within the PowerShell Cloud Shell.
Using the Azure portal, delete your Cloud Shell file share in Azure Storage.
Create a 2-node 'B2s' Kubernetes Service using Advanced networking with the provided defaults.
Open the Cloud Shell using either bash or PowerShell.
Do az aks get-credentials and provide the name of your AKS cluster.
Do kubectl create namespace ingress-basic.
Do helm repo add stable https://kubernetes-charts.storage.googleapis.com/
The command above will warn that you need to perform a helm init.
Do an az aks list to get the servicePrincipalProfile:clientId of your AKS cluster.
Do helm init --service-account using the clientId from the command above for the service-account parameter.
Do the helm install using the parameters from the Microsoft docs Create an HTTPS ingress controller on Azure Kubernetes Service (AKS).
At this point you should get the error mentioned in the title.
Any suggestions on what I might be missing?
Right, I think you need to actually create a service account in Kubernetes in order for this to work; sample code:
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:tiller
helm init --service-account tiller
# Users in China: You will need to specify a specific tiller-image in order to initialize tiller.
# The list of tiller image tags are available here: https://dev.aliyun.com/detail.html?spm=5176.1972343.2.18.ErFNgC&repoId=62085.
# When initializing tiller, you'll need to pass in --tiller-image
helm init --service-account tiller \
--tiller-image registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:<tag>
https://rancher.com/docs/rancher/v2.x/en/installation/options/helm2/helm-init/
You are trying to use an Azure Service Principal instead of a Kubernetes Service Account; they are not the same thing.
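To confirm Tiller actually came up after the init, a quick check (the label selector assumes the standard labels helm init puts on the tiller-deploy pod):
kubectl -n kube-system get pods -l app=helm,name=tiller
helm version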
OK, two items.
Tiller was not installed. On 2/14/2020 I was able to install an ingress controller using helm, but on 2/18 I was not able to do a helm install --dry-run on a chart because of the above-mentioned error. After following the documentation Install applications with Helm in Azure Kubernetes Service (AKS), I got Tiller installed properly.
After getting Tiller properly installed, the helm install command for the ingress controller from Create an HTTPS ingress controller on Azure Kubernetes Service (AKS) was failing with a 'chart not found' error.
I modified the command from
helm install nginx stable/nginx-ingress \
to
helm install stable/nginx-ingress \
and the chart is now deploying properly.
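For reference, the same install expressed in each CLI's syntax (release and chart names as used in this thread; helm v2 takes the release name via the --name flag instead of a positional argument):
# helm v3
helm install nginx stable/nginx-ingress
# helm v2
helm install --name nginx stable/nginx-ingress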
I am now back to where I was on 2/14. Thanks everyone for the help.
OK, I got the following letter from Microsoft Tech Support letting me know that there was a problem with Azure Cloud Shell. I did a helm version this morning and now see
version.BuildInfo{Version:"v3.1.1", GitCommit:"afe70585407b420d0097d07b21c47dc511525ac8", GitTreeState:"clean", GoVersion:"go1.13.8"}
I did not see this on 2/14. So it looks like I was NOT crazy and that Microsoft has fixed this issue. Based on her letter, the issue with the chart name should also be resolved.
Hi Brian, thanks for your update. We are so sorry for the inconvenience. There was an error in the script which builds the Cloud Shell image. Helm released 2.16.3 more recently than 3.1, and the build script picked that up as the 'latest' release, causing an inadvertent downgrade. As helm v3 does not require a tiller pod, there is no tiller pod for the downgraded helm v2 to find. The helm version will be re-upgraded in the next release. As confirmed by you, the queries related to this issue have been solved, and hence I will go ahead and archive your case at this time. Please remember that support for this case does not end here. Should you need further assistance with your issue, you can reach out to me and I will continue working with you.

Here is a summary of the key points of the case for your records.

Issue Definition: Helm install Error: could not find tiller.

Resolution/Suggestion Summary: You could refer to the following document to configure helm v2: https://learn.microsoft.com/en-us/azure/aks/kubernetes-helm#install-an-application-with-helm-v2. Alternatively, you may also use the command below to upgrade helm to version 3 as a workaround: curl https://aka.ms/cloud-shell-helm3 | bash. The command 'helm version' can be used to verify the version of Helm you have installed.

Besides, the syntax for helm 3 is different from helm 2. The syntax for helm v3 is 'helm install [NAME] [CHART] [flags]' while for helm v2 it is 'helm install [CHART] [flags]'. So you need to remove the word nginx after the word install if you are using helm v2. You could refer to the following document for more information: https://learn.microsoft.com/en-us/azure/aks/kubernetes-helm#install-an-application-with-helm-v2

It was my pleasure to work with you on this service request. Please do not hesitate to contact me if I can be of further assistance. Thank you very much for your support of Microsoft Azure. Have a nice day 😊!

Cannot access deployed spring-boot RESTful API in Minikube

Good day!
I bumped into this post:
https://cleverbuilder.com/articles/spring-boot-kubernetes/
It seems I am able to run my spring-boot RESTful application, as it is showing in the Minikube dashboard. The problem is that when I execute kubectl get pods or kubectl get svc, I am not seeing my application. The Kubernetes namespace my application is using is test. I'm really puzzled now about how I can access my application. Please shed some light. Thanks!
Run kubectl get pods -n test and kubectl get svc -n test; this should show you the desired output.
By default, Kubernetes starts with three namespaces. default is the catch-all namespace for all objects not belonging to kube-public or kube-system; it holds the default set of pods, services, and deployments used by the cluster. Since your pod is in a custom namespace, test (which you created), you'll need to specify that namespace when querying for your deployment or pod.
So kubectl get pods is actually kubectl get pods -n default, meaning show pods in the default namespace. Hence, kubectl get pods -n test will show you all your pods in the test namespace.
See Documentation
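As an aside, if you'd rather not type -n test on every command, you can make it the context's default namespace (this assumes a kubectl new enough to support the --current flag):
kubectl config set-context --current --namespace=test
# from here on, kubectl get pods / kubectl get svc default to the test namespace
kubectl get pods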

Docker image to be re-read by Kubernetes on Google Cloud

As I am green on this subject, could you please help?
I deploy a Docker image to Google Cloud Kubernetes.
What do I need to do to make the cluster re-read the Docker image when a new one appears?
My code is:
sudo docker build -t gcr.io/${PROJECT_ID}/sf:$ENV .
sudo docker push gcr.io/${PROJECT_ID}/sf:$ENV
sudo gcloud container clusters create sf:$ENV --num-nodes=3
sudo kubectl run sfmill-web$ENV --image=gcr.io/${PROJECT_ID}/sf:$ENV --port 8088
sudo kubectl expose deployment sfmill-web$ENV --type=LoadBalancer --port 8088 --target-port 8088
kubectl set image deployment/sfmill-web$ENV sf=sf:$ENV
I encourage you to explore using Kubernetes configuration files to define resources.
You can explore the YAML for your deployment with:
kubectl get deployment/sfmill-web$ENV --output=yaml > ${PWD}/sfmill-web$ENV.yaml
You could then tweak the value of the image property and reapply it to your cluster using:
kubectl apply --filename=${PWD}/sfmill-web$ENV.yaml
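The property to tweak sits under the pod template in the exported YAML; trimmed, it looks roughly like this (the container name is whatever kubectl run generated, so match it against your exported file):
spec:
  template:
    spec:
      containers:
      - name: sfmill-web                  # illustrative; check your exported file
        image: gcr.io/<project-id>/sf:<env>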
The main benefit of the configuration-file approach is that you're effectively creating code to manage your infrastructure, and each time you change that code you can check it into source control, thereby knowing what you did at each stage.
Using kubectl imperatively is great, but it makes it more challenging to recreate the cluster from scratch... which kubectl command did I perform next? Yes, you could (bash) script all your kubectl commands too, which would help, but configuration files remain the ideal solution.
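As an aside, if you do stay imperative, the set image line from the question needs the container name and the full registry path to take effect; a sketch using the question's naming (kubectl run names the container after the deployment):
kubectl set image deployment/sfmill-web$ENV sfmill-web$ENV=gcr.io/${PROJECT_ID}/sf:$ENV
Note that a rollout only happens if the image reference actually changes, so pushing a new build under the same tag will not trigger a redeploy on its own.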
HTH

How to use kubernetes helm from a CI/CD pipeline hosted outside the k8s cluster

I am using Kubernetes helm to deploy apps to my cluster. Everything works fine from my laptop, where helm uses the cluster's kubeconfig file to deploy to the cluster.
I want to use helm from my CI/CD server (which is separate from my cluster) to automatically deploy apps to my cluster. I have created a k8s service account for my CI/CD server to use. But how do I create a kubeconfig file for the service account so that helm can use it to connect to my cluster from my CI/CD server?
Or is this not the right way to use helm from a CI/CD server?
Helm works by using the installed kubectl to talk to your cluster. That means that if you can access your cluster via kubectl, you can use helm with that cluster.
Don't forget to make sure you're using the proper context in case you have more than one cluster in your kubeconfig file. You can check that by running kubectl config current-context and comparing it to the cluster details in the kubeconfig.
You can find more details in Helm's docs; check the quick start guide for more information.
Why not just run your CI server inside your Kubernetes cluster? Then you don't have to manage secrets for accessing the cluster. We do that on Jenkins X and it works great; we can run kubectl or helm inside pipelines just fine.
In this case you will want to install kubectl on whichever slave or agent you have identified for use by your CI/CD server, OR install kubectl on the fly in your automation, AND make sure you have, or are able to generate, a kubeconfig to use.
To answer the question:
But how do I create a kube-config file for the service account ...
You can set new clusters, credentials, and contexts for use with kubectl in a default or custom kubeconfig file using kubectl config set-cluster, kubectl config set-credentials, and kubectl config set-context. If you have the KUBECONFIG env variable set and pointing to a kubeconfig file, that works; when setting new entries you can also simply pass --kubeconfig to point to a custom file.
Here's the relevant API documentation for v1.6.
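Putting that together, here is a sketch of building a standalone kubeconfig for an existing service account (the account name ci-deployer, the file names, and the legacy token-secret lookup are assumptions; adjust for your cluster):
# Pull the service account's token (assumes a pre-1.24-style token secret exists)
SECRET=$(kubectl -n kube-system get sa ci-deployer -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl -n kube-system get secret "$SECRET" -o jsonpath='{.data.token}' | base64 --decode)
# Write cluster, user, and context entries into a dedicated file
kubectl config set-cluster my-cluster --server=https://<api-server> \
  --certificate-authority=ca.crt --embed-certs=true --kubeconfig=ci.kubeconfig
kubectl config set-credentials ci-deployer --token="$TOKEN" --kubeconfig=ci.kubeconfig
kubectl config set-context ci --cluster=my-cluster --user=ci-deployer --kubeconfig=ci.kubeconfig
kubectl config use-context ci --kubeconfig=ci.kubeconfig
# Ship ci.kubeconfig to the CI/CD server and point KUBECONFIG at it; helm then picks it up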
We created helmsman, which provides you with a declarative syntax to manage helm charts in your cluster. It configures kubectl (and therefore helm) for you wherever you run it. It can also be used from a Docker container.
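If I remember its README correctly, usage is roughly as follows (the desired-state file name is illustrative; check the helmsman docs for the current flags):
helmsman --apply -f desired_state.toml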
