I'm running Kubernetes in Azure.
I want to delete a specific deployment with az aks or kubectl.
The only info I've found is how to delete pods, but that's not what I'm looking for, since the pods will regenerate once deleted.
I know I can just go to the UI and delete the deployment, but I want to do it with az aks or kubectl.
I've run
kubectl get all -A
Then I copy the name of the deployment that I want to delete and run:
kubectl delete deployment zr-binanceloggercandlestick-btcusdt-2hour
kubectl delete deployment deployment.apps/zr-binanceloggercandlestick-btcusdt-12hour
but with no success; I get these errors:
Error from server (NotFound): deployments.extensions "zr-binanceloggercandlestick-btcusdt-2hour" not found
error: there is no need to specify a resource type as a separate argument when passing arguments in resource/name form (e.g. 'C:\Users\amnesia\.azure-kubectl\kubectl.exe get resource/<resource_name>' instead of 'C:\Users\amnesia\.azure-kubectl\kubectl.exe get resource resource/<resource_name>'
Find out all deployments across all namespaces
kubectl get deploy -A
Then delete the deployment by name from its namespace; both deploymentname and namespacename below come from the output of the command above:
kubectl delete deploy deploymentname -n namespacename
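For example, if the deployment from the question turned out to live in a namespace called binance-loggers (a made-up name purely for illustration), the sequence would look like:
# note the NAMESPACE column in the output
kubectl get deploy -A | grep zr-binanceloggercandlestick-btcusdt-2hour
# then delete it from that namespace
kubectl delete deploy zr-binanceloggercandlestick-btcusdt-2hour -n binance-loggers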
Docs on how to configure kubectl to connect to AKS.
Use the below command.
kubectl delete deployment deployment-name-here
More about the command here.
Kubeflow pipeline runs generate artifacts, which, from what I can see in the dashboard, are saved in MinIO. How can I delete them?
Turns out it can be done through the MinIO UI. To access MinIO remotely, use your configured kubectl:
kubectl port-forward -n kubeflow svc/minio-service 9000:9000
And then in web browser go to localhost:9000.
Also, each bucket can be assigned a lifecycle rule, which gives objects added under a given prefix an expiration date; see the MinIO Python client API reference:
https://docs.min.io/docs/python-client-api-reference.html
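If you would rather set such a rule from the command line than through the UI, MinIO's mc client can do it too. This is only a sketch: the alias and credentials are placeholders, mlpipeline is assumed as Kubeflow's default artifact bucket, and the exact ilm flags differ between mc releases:
# point an alias at the port-forwarded MinIO instance
mc alias set kfminio http://localhost:9000 <access-key> <secret-key>
# expire objects in the bucket after 30 days
mc ilm add --expiry-days "30" kfminio/mlpipeline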
Helm install called from the Cloud Shell worked last week, but now, regardless of whether I use bash or PowerShell, it returns
Error: could not find tiller
Last week I was able to create an ingress controller by following the Microsoft article Create an HTTPS ingress controller on Azure Kubernetes Service (AKS).
Now, when I get to the helm install step, I get the error indicated in the title. To recreate this issue:
Do clouddrive unmount from within the PowerShell Cloud Shell.
Using the Azure portal, delete your Cloud Shell file share in Azure Storage.
Create a 2-node 'B2s' Kubernetes Service using advanced networking with the provided defaults.
Open the Cloud Shell using either bash or PowerShell.
Do az aks get-credentials and provide the name of your AKS cluster.
Do kubectl create namespace ingress-basic
Do helm repo add stable https://kubernetes-charts.storage.googleapis.com/
The command above will warn that you need to perform a helm init.
Do an az aks list to get the servicePrincipalProfile:clientId of your AKS cluster.
Do helm init --service-account using the clientId from the command above as the service-account parameter.
Do the helm install using the parameters from the Microsoft Docs article Create an HTTPS ingress controller on Azure Kubernetes Service (AKS).
At this point you should get the error mentioned in the title.
Any suggestions on what I might be missing?
Right, I think you need to actually create a service account in Kubernetes in order for this to work. Sample code:
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:tiller
helm init --service-account tiller
# Users in China: You will need to specify a specific tiller-image in order to initialize tiller.
# The list of tiller image tags are available here: https://dev.aliyun.com/detail.html?spm=5176.1972343.2.18.ErFNgC&repoId=62085.
# When initializing tiller, you'll need to pass in --tiller-image
helm init --service-account tiller \
--tiller-image registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:<tag>
https://rancher.com/docs/rancher/v2.x/en/installation/options/helm2/helm-init/
You, on the other hand, are trying to use an Azure service principal instead of a Kubernetes service account; they are not the same thing.
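Once tiller comes up, you can sanity-check it before retrying the install; this check is my suggestion rather than part of the original answer:
# tiller runs as a deployment in kube-system
kubectl -n kube-system get pods -l name=tiller
# helm version should now report both a client and a server version
helm version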
OK, two items.
Tiller was not installed. On 2/14/2020 I was able to install an ingress controller using helm, but on 2/18 I was not able to do a helm install --dry-run on a chart because of the above-mentioned error. Anyway, after following this documentation, Install applications with Helm in Azure Kubernetes Service (AKS), I got Tiller installed properly.
After getting Tiller properly installed, the helm install command for the ingress controller from Create an HTTPS ingress controller on Azure Kubernetes Service (AKS) was failing with a chart-not-found error.
I modified the command from
helm install nginx stable/nginx-ingress \
to
helm install stable/nginx-ingress \
and the chart is now deploying properly.
I am now back to where I was on 2/14. Thanks everyone for the help.
OK, I got the following letter from Microsoft Tech Support letting me know that there was a problem with Azure Cloud Shell. I ran helm version this morning and now see
version.BuildInfo{Version:"v3.1.1",
GitCommit:"afe70585407b420d0097d07b21c47dc511525ac8",
GitTreeState:"clean", GoVersion:"go1.13.8"}
Which I did not see on 2/14. So it looks like I was NOT crazy and Microsoft has fixed this issue. Based on her letter, the issue with the chart name should also be resolved.
Hi Brian, Thanks for your update. We are so sorry for the
inconvenience. There was an error in the script which builds the cloud
shell image. Helm released 2.16.3 more recently than 3.1 and the build
script picked that up as the 'latest' release, causing the inadvertent
downgrade. As helm v3 does not require the tiller pod, the tiller pod
cannot be found when using helm v2. The helm version will be
re-upgraded in the next release. As confirmed by you, the queries
related to this issue have been solved and hence I will go ahead and
archive your case at this time. Please remember that support for this
case does not end here. Should you need further assistance with your
issue, you can reach out to me and I will continue working with you.
Here is a summary of the key points of the case for your records.
Issue Definition: Helm install Error: could not find tiller.
Resolution/Suggestion Summary: You could refer to the below document to configure helm V2:
https://learn.microsoft.com/en-us/azure/aks/kubernetes-helm#install-an-application-with-helm-v2.
Alternatively, you may also use the command below to upgrade the helm
to version 3 as a workaround: curl https://aka.ms/cloud-shell-helm3 |
bash. The command ‘helm version’ can be used to verify the version of
Helm you have installed.
Besides, the syntax for helm 3 is different from helm 2. The syntax
for helm v3 is ‘helm install [NAME] [CHART] [flags]’ while for helm v2
is ‘helm install [CHART] [flags]’. So you need to remove the word
nginx after the word install if you are using helm v2. You could refer
to the following document for more information:
https://learn.microsoft.com/en-us/azure/aks/kubernetes-helm#install-an-application-with-helm-v2
It was my pleasure to work with you on this service request. Please do
not hesitate to contact me if I can be of further assistance. Thank
you very much for your support of Microsoft Azure. Have a nice day😊!
Good day!
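For reference, the syntax difference described in the letter looks like this with the ingress chart from the walkthrough:
# helm v3: the release name is a positional argument
helm install nginx stable/nginx-ingress
# helm v2: no positional release name (use --name to set one explicitly)
helm install stable/nginx-ingress
helm install --name nginx stable/nginx-ingress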
I bumped into this post:
https://cleverbuilder.com/articles/spring-boot-kubernetes/
And it seems I am able to run my Spring Boot RESTful application, as it is showing in the Minikube dashboard. The problem is that when I execute kubectl get pods or kubectl get svc, I am not seeing my application. The Kubernetes namespace that my application is using is test. I'm really puzzled now about how I can access my application. Please shed some light. Thanks!
Run kubectl get pods -n test and kubectl get svc -n test, and this should show you the desired output.
By default, Kubernetes starts with three namespaces: default, kube-system, and kube-public. The default namespace is the catch-all for objects not belonging to kube-public or kube-system, and it holds the default set of pods, services, and deployments used by the cluster. Since your pod is in a custom namespace, test (which you created), you need to specify that namespace when querying for the deployment you created or the pod that was deployed.
So, kubectl get pods is actually kubectl get pods -n default, meaning show pods in the default namespace. Hence, doing kubectl get pods -n test will show you all your pods in the test namespace.
See Documentation
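If you don't want to type -n test on every command, you can also make test the default namespace for your current kubectl context; this is just a convenience, not something the answer requires:
kubectl config set-context --current --namespace=test
# subsequent commands now default to the test namespace
kubectl get pods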
The instructions say to install it, but don't suggest which method will work, in what order, or with which load balancer. I keep getting useless test certificates installed on my deployments.
It sounds like you are getting the default chart from helm, which is slightly different from the instructions you will find on the kube-lego GitHub. The helm chart uses Let's Encrypt's staging server by default. Also, if you are not familiar with helm, it can be quite tricky to undo the remnants that the kube-lego helm chart leaves behind, especially as tiller likes to keep the old installation history.
Here's a brief overview of setting up GitLab Auto DevOps with Google Kubernetes Engine. I should mention that you can skip up to step 13 if you use the GitLab GKE wizard introduced in 10.1, but it won't really tell you what's happening or how to debug when things go wrong, and the wizard is a one-way process.
In GitLab, under Integrations, you'll find a Kubernetes integration button
You can also find this under CI/CD -> Clusters
The API URL is the endpoint IP prefixed by https://
The "CA Certificate" it asks for is service account CA, not the same as the cluster CA
To connect to GitLab you'll need to create a k8s service account and get the CA and token for it. Do so by using gcloud to authenticate kubectl. GKE makes it easy by doing this for you through the "Connect" button in the Kubernetes Engine console
https://kubernetes.io/docs/admin/authentication/#service-account-tokens
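For reference, the command that the "Connect" button gives you looks roughly like this; the cluster name, zone, and project are placeholders:
gcloud container clusters get-credentials <cluster-name> --zone <zone> --project <project-id>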
All commands must be run with a custom namespace; it will not work with the default one.
kubectl create namespace (NS)
kubectl create serviceaccount (NAME) --namespace=(NS)
This will create two tokens in your secrets: a default one and your service account one
kubectl get -o json serviceaccounts (NAME) --namespace=(NS)
kubectl get -o json secret (secrets-name-on-prev-result) --namespace=(NS)
To decode the base64 values, echo them to base64 -d, for example:
echo mybase64stringasdfasdf= | base64 -d
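If you'd rather skip the copy-and-decode step, kubectl's jsonpath output can pull the values directly; the secret name is still the one returned by the earlier command:
kubectl get secret (secrets-name-on-prev-result) --namespace=(NS) -o jsonpath='{.data.token}' | base64 -d
kubectl get secret (secrets-name-on-prev-result) --namespace=(NS) -o jsonpath='{.data.ca\.crt}' | base64 -d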
To install helm, use the installation script method
https://docs.helm.sh/using_helm/#from-script
Init helm and update its repos:
helm init
helm repo update
Install nginx-ingress; it's ingress with nginx. You could use the Google load balancer as well, but it's not as portable.
helm install stable/nginx-ingress
Make a wildcard subdomain with an A record pointing to the IP address set up by the ingress
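To find that IP, look at the EXTERNAL-IP column of the ingress controller's LoadBalancer service; since no --name was passed to helm install, the release name (and therefore the service name) will be an auto-generated one:
kubectl get svc | grep nginx-ingress-controller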
In GitLab, make Auto DevOps use your newly set up wildcard subdomain; if it's "*.x" on "me.com", you should enter "x.me.com" in the Auto DevOps settings.
Now install kube-lego:
helm install --name lego-main \
--set config.LEGO_EMAIL=CHANGEMENOW@example.com \
--set config.LEGO_URL=https://acme-v01.api.letsencrypt.org/directory \
stable/kube-lego
Helm packages are called charts, and each installed chart becomes a release in the cluster
If you wish to delete a release you must purge it; otherwise tiller will keep its history:
helm delete --purge my-release-name
You can find the release names and their associated charts with
helm list
Troubleshooting
Order doesn't seem to matter too much. Attaching to a pod can be a useful way of debugging problems, such as a bad email address. The ideal order, however, is probably nginx-ingress, then kube-lego, then GitLab. I did make it work with GitLab first, then nginx-ingress, then kube-lego.
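A minimal sketch of that kind of debugging, assuming the kube-lego release installed above (the pod name is whatever your cluster generated):
# find the kube-lego pod
kubectl get pods | grep kube-lego
# stream its logs to spot problems such as a rejected email address
kubectl logs -f <kube-lego-pod-name>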
I heard from Sid that they are working to make this easier... let's hope so.
Where can I find an official template which describes how to create your .yaml file to set up services/pods in Kubernetes?
You can find the specification for a pod here http://kubernetes.io/docs/api-reference/v1/definitions/#_v1_pod
A good starting point is also the examples https://github.com/kubernetes/kubernetes/tree/master/examples
Additionally, you can create the resource via kubectl and export it to YAML.
E.g. for a pod you can run these commands:
kubectl run nginx --image=nginx
kubectl get pods nginx -o yaml > pod.yaml
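If you only want the manifest without creating anything, kubectl can also emit it with a dry run; depending on your kubectl version the flag is --dry-run or --dry-run=client:
kubectl run nginx --image=nginx --dry-run -o yaml > pod.yaml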