How to delete kubernetes resources across all namespaces? - bash

I want to know if there is a way to delete specific resources in Kubernetes across all namespaces. I want to delete all services of type LoadBalancer at once, and have this automated in a pipeline. I already built an xargs kubectl command which gets the names of the LoadBalancer services across all namespaces and feeds them to the kubectl delete command. I just need to loop across all namespaces now.
kubectl get service -A -o json | jq '.items[] | select (.spec.type=="LoadBalancer")' | jq '.metadata.name' | xargs kubectl delete services --all-namespaces
error: a resource cannot be retrieved by name across all namespaces
If I remove the --all-namespaces flag and run with the --dry-run=client flag instead, I get a dry-run deletion of all the services I want deleted, across all namespaces. Is there a way k8s lets you delete resources by name across all namespaces?
Any ideas?
UPDATE:
This is the output of running the command using --dry-run flag, it gets the name of all the services I want to delete and automatically feeds them to the kubectl delete command
kubectl get service -A -o json | jq '.items[] | select (.spec.type=="LoadBalancer")' | jq '.metadata.name' | xargs kubectl delete services --dry-run=client
service "foo-example-service-1" deleted (dry run)
service "bar-example-service-2" deleted (dry run)
service "baz-example-service-3" deleted (dry run)
service "nlb-sample-service" deleted (dry run)
The only part missing is that I need to do the deletion across all namespaces to delete all the specified services. I only want to delete services of type LoadBalancer, and not other types such as ClusterIP or NodePort, so the specific names must be provided.

You can achieve that using jsonpath with the following command:
kubectl get services --all-namespaces \
  -o jsonpath='{range .items[?(@.spec.type=="LoadBalancer")]}{.metadata.name}{" -n "}{.metadata.namespace}{"\n"}{end}' \
  | xargs -L1 kubectl delete services
The kubectl get command prints one <service_name> -n <namespace_name> pair per line, and xargs -L1 runs a separate kubectl delete for each pair, so every service is deleted in its own namespace.
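Since the question already goes through jq, here is an equivalent sketch that uses jq to emit the same name/namespace pairs (assuming jq 1.5+ for string interpolation; append --dry-run=client to the delete side to preview first, as in the question):
kubectl get services -A -o json \
  | jq -r '.items[] | select(.spec.type=="LoadBalancer") | "\(.metadata.name) -n \(.metadata.namespace)"' \
  | xargs -L1 kubectl delete services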

Related

How to reverse the effect of this command? kubectl config view

My helm install was giving some errors and I used this command to try to fix it. Now my cluster is gone
kubectl config view --raw >~/.kube/config
The error is:
localhost:8080 was refused - did you specify the right host or port?
You can't reverse it: you have basically overwritten the existing file, and the only way to get it back is to restore it from a backup.
See this answer for more details on why it happened: https://stackoverflow.com/a/6696881/3066081
It boils down to the way bash handles shell redirects: the > redirection truncates the file before kubectl has a chance to read it, so kubectl sees an empty kubeconfig and creates the following YAML:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
localhost:8080 is the default address kubectl tries to connect to when the kubeconfig is empty.
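You can see the same truncation effect in isolation with any command that reads and writes the same file (demo.txt is just a throwaway example file):
printf 'important config\n' > demo.txt
cat demo.txt                  # prints: important config
sort demo.txt > demo.txt      # the shell truncates demo.txt before sort ever reads it
cat demo.txt                  # prints nothing: the contents are gone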
As for the proper way of doing this in the future, you should:
Backup your current ~/.kube/config to ~/.kube/config.bak, just in case
cp ~/.kube/config{,.bak}
Create a new, temporary file containing the output of your command, e.g.
kubectl config view --raw > raw_config
Replace the kubeconfig with the new one
mv raw_config ~/.kube/config
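Before the final mv, you can also sanity-check the temporary file by pointing kubectl at it explicitly (a small sketch using the raw_config name from above):
kubectl --kubeconfig raw_config config get-contexts
kubectl --kubeconfig raw_config get nodes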

How to get only the "project name" from GCP using gcloud command?

I am trying to get the project name of my GCP project.
Firstly, I tried using the command:
gcloud projects describe $PROJECT_ID
Here you get the project ID, number, name, organization and other details.
Then I planned to use grep to get the project name.
It's often more convenient to use gcloud only:
PROJECT=[[YOUR-PROJECT]]
gcloud projects describe ${PROJECT} \
--format="value(name)"
You may use e.g. value(projectNumber) to get the project number.
To get the project name of your GCP project, use the below command:
gcloud projects describe $PROJECT_ID | grep name | awk -F'name: ' '{print $2}'
where $PROJECT_ID is the ID of your GCP project.
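For use in a script or pipeline, the --format approach can be captured straight into a variable (PROJECT_NAME is just an illustrative name):
PROJECT_NAME=$(gcloud projects describe "${PROJECT_ID}" --format="value(name)")
echo "Project name: ${PROJECT_NAME}"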

EC2 instance region is not populated in user-data script

I want to set some tags on an EC2 spot instance; however, as it is impossible to do this directly in the spot request, I do it via a user-data script. All goes fine when I specify the region statically, but that is not a universal approach. When I try to detect the current region from within the user-data script, the region variable is always empty. I do it in the following way:
#!/bin/bash
region=$(ec2-metadata -z | awk '{print $2}' | sed 's/[a-z]$//')
aws ec2 create-tags \
--region $region \
--resources `wget -q -O - http://169.254.169.254/latest/meta-data/instance-id` \
--tags Key=sometag,Value=somevalue Key=sometag,Value=somevalue
I tried adding a delay before populating the region
/bin/sleep 30
but it made no difference.
However, when I run this script manually after start, the tags are added fine. What is going on?
Also, why doesn't the AWS CLI pick up the default region from the profile? I have aws configure set up properly inside the instance, but without the --region option it throws an error saying the region is not specified.
I suspect the ec2-metadata command is not available when your user-data script is executed. Try getting the region from the metadata server directly (which is what ec2-metadata does anyway):
region=$(curl -fsq http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/[a-z]$//')
The AWS CLI does use the region from the default profile.
You can now use this endpoint to get only the instance region (no parsing needed):
http://169.254.169.254/latest/meta-data/placement/region
So in this case:
region=$(curl -s http://169.254.169.254/latest/meta-data/placement/region)
I ended up with
region=$(curl -s http://169.254.169.254/latest/dynamic/instance-identity/document | python -c "import json,sys; print(json.load(sys.stdin)['region'])")
which worked fine. However, it would be nice if somebody could explain the nuts and bolts.
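Putting the pieces together, a minimal user-data sketch (the tag key and value are placeholders, and it assumes the /placement/region endpoint mentioned above):
#!/bin/bash
# fetch the instance id and region from the instance metadata service
instance_id=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
region=$(curl -s http://169.254.169.254/latest/meta-data/placement/region)
# tag this instance; Key/Value below are placeholders
aws ec2 create-tags \
  --region "$region" \
  --resources "$instance_id" \
  --tags Key=sometag,Value=somevalue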

gcloud beta logging for tailing logs

I just found out about Google's new "gcloud beta logging" service.
The classic sample they show is something like this:
gcloud beta logging write my-test-log "A simple entry"
But I would like to log every new entry in a specific log file. For example:
tail -F My_Log_File.txt | grep gcloud beta logging write my-test-log
What is the best practice for this operation?
You can do this:
tail -F My_Log_File.txt | xargs -I {} gcloud beta logging write my-test-log "{}"
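Written out as a shell loop, which can be easier to extend (the --severity flag is optional and assumed from the gcloud logging CLI):
tail -F My_Log_File.txt | while IFS= read -r line; do
  gcloud beta logging write my-test-log "$line" --severity=INFO
done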
Or you can use a logging agent to watch certain files and log them to the logging service:
https://cloud.google.com/logging/docs/agent/installation
http://docs.fluentd.org/articles/in_tail

How to properly configure the Amazon EC2 AMI for 'hadoop-ec2'?

I am trying to launch an instance on Amazon EC2. I have researched this problem extensively, but I have not found any helpful information.
When I run the command hadoop-ec2 launch-cluster mycluster 2, I receive the following error message:
Starting master with AMI.
Required parameter 'AMI' missing (-h for usage)
I have entered my AWS key, AWS secret key, AWS key pairs, etc. I am using hadoop-1.0.4. I am using the default S3 bucket (hadoop-images), but I have tried many other AMIs and I always get the same error message.
Has anybody experienced this problem before?
The basic issue is that the search for images the launch-hadoop-master script performs is not returning any results. The most likely cause of this is the different AMIs that are available in different regions (but it could be due to any changes you've made to S3_BUCKET and HADOOP_VERSION in hadoop-ec2-env.sh).
From the launch-hadoop-master script:
# Finding Hadoop image
AMI_IMAGE=`ec2-describe-images -a | grep $S3_BUCKET \
  | grep $HADOOP_VERSION \
  | grep $ARCH \
  | grep available \
  | awk '{print $2}'`
# Start a master
echo "Starting master with AMI $AMI_IMAGE"
So it appears that AMI_IMAGE is not being set to a valid image because the search for AMIs matching the various grep filters returns nothing (the defaults for the Hadoop 1.0.4 distribution are S3_BUCKET=hadoop-images, HADOOP_VERSION=0.19.0, and ARCH=x86 if you're using m1.small instances). If you search the public AMIs in the US-West-2 region, you'll see that there aren't many Hadoop images, but if you search the public AMIs in the US-East-1 region, you'll see that there are quite a few. Thus, one way around this issue is to work in the US-East-1 region (this is simplest); alternatively, set EC2_URL in your login script via export EC2_URL=https://ec2.us-east-1.amazonaws.com, but then you need to make sure your key pairs exist in that region in the AWS console.
If you did indeed change HADOOP_VERSION to 1.0.4, I'll note that
ec2-describe-images -a | grep hadoop-images \
  | grep "1.0.4" \
  | grep x86 \
  | grep available
doesn't return any images in the US-East-1 region. Note that the version (HADOOP_VERSION) of the Hadoop distribution that you are running the hadoop-ec2 command from does not need to be the same as the version of Hadoop that the images will be running.
Lastly, as a blunt fix, you could find the AMI that you want to use, and force set AMI_IMAGE to the image name in the launch-hadoop-master and launch-hadoop-cluster scripts.
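A minimal sketch of that blunt fix (ami-xxxxxxxx is a placeholder for whichever public Hadoop AMI you chose):
# in launch-hadoop-master (and launch-hadoop-cluster), replace the
# ec2-describe-images search with a hard-coded image id:
AMI_IMAGE=ami-xxxxxxxx
echo "Starting master with AMI $AMI_IMAGE"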
