My helm install was giving some errors and I used this command to try to fix it. Now my cluster is gone
kubectl config view --raw >~/.kube/config
The error is:
localhost:8080 was refused - did you specify the right host or port?
You can't reverse it: you have overwritten the existing file, and the only way to get it back is to restore it from a backup.
See this answer for more details on why it happened: https://stackoverflow.com/a/6696881/3066081
It boils down to the way bash handles shell redirects: > truncates the file before kubectl has a chance to read it, so kubectl sees an empty kubeconfig and writes back the following YAML:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
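A quick way to see the truncation behaviour for yourself, with a throw-away file rather than your kubeconfig:
echo hello > demo.txt
cat demo.txt > demo.txt   # the shell truncates demo.txt before cat reads it
cat demo.txt              # prints nothing - the contents are gone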
localhost:8080 is the default address kubectl tries to connect to when the kubeconfig is empty.
As for the proper way of doing this in the future, you should:
Back up your current ~/.kube/config to ~/.kube/config.bak, just in case:
cp ~/.kube/config{,.bak}
Create a new, temporary file containing the output of your command, i.e.
kubectl config view --raw > raw_config
Replace the kubeconfig with the new one:
mv raw_config ~/.kube/config
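Put together, a minimal sketch of the safe pattern (same file names as above):
# keep a backup, write to a temporary file, and only replace the kubeconfig if the command succeeded
cp ~/.kube/config ~/.kube/config.bak
kubectl config view --raw > raw_config && mv raw_config ~/.kube/config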
I want to know if there is a way to delete specific resources in Kubernetes across all namespaces. I want to delete all the services of type LoadBalancer at once, and have this automated in a pipeline. I already built an xargs kubectl command which gets the names of LoadBalancer services across all namespaces and feeds the output to the kubectl delete command. I just need to loop across all namespaces now.
kubectl get service -A -o json | jq '.items[] | select (.spec.type=="LoadBalancer")' | jq '.metadata.name' | xargs kubectl delete services --all-namespaces
error: a resource cannot be retrieved by name across all namespaces
If I remove the --all-namespaces flag and use the --dry-run=client flag instead, I get a dry-run deletion of all the services I want deleted across all namespaces. Is there a way Kubernetes lets you delete resources by name across all namespaces?
Any ideas?
UPDATE:
This is the output of running the command with the --dry-run flag; it gets the names of all the services I want to delete and automatically feeds them to the kubectl delete command:
kubectl get service -A -o json | jq '.items[] | select (.spec.type=="LoadBalancer")' | jq '.metadata.name' | xargs kubectl delete services --dry-run=client
service "foo-example-service-1" deleted (dry run)
service "bar-example-service-2" deleted (dry run)
service "baz-example-service-3" deleted (dry run)
service "nlb-sample-service" deleted (dry run)
The only part missing is that the deletion needs to happen across all namespaces so that all the specified services are removed. I only want to delete services of type LoadBalancer, not other types such as ClusterIP or NodePort, so the specific names must be provided.
You can achieve that using jsonpath with the following command:
kubectl get services --all-namespaces \
  -o jsonpath='{range .items[?(@.spec.type=="LoadBalancer")]}{.metadata.name}{" -n "}{.metadata.namespace}{"\n"}{end}' \
  | xargs -n 3 kubectl delete service
The kubectl get command prints one <service_name> -n <namespace_name> line per LoadBalancer service, and xargs -n 3 feeds each triple to kubectl delete so every service is deleted in its own namespace.
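If you prefer to stay with jq, as in your original pipeline, a minimal sketch along the same lines (note the -r flag so the names come out without surrounding quotes):
kubectl get services -A -o json \
  | jq -r '.items[] | select(.spec.type=="LoadBalancer") | "\(.metadata.name) -n \(.metadata.namespace)"' \
  | xargs -n 3 kubectl delete service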
I'm currently busy learning Kubernetes and running configs on the command line. I'm using an M1 Mac on macOS 11.5.1, and one of the commands I wanted to run is curl "http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/proxy", but I get the error message curl: (3) URL using bad/illegal format or missing URL. Not sure if anyone has experienced this issue before; I would appreciate the help.
First, the curl command should receive a single host here, not multiple hosts.
Therefore the pod name must be a single value.
Then, you need to save the pod's name to a variable without any special characters (such as spaces).
Last, when you're using kubectl proxy, you need to add the -L option to the curl command so it will follow the redirect.
A simple example:
# run pod with echo image
kubectl run echo --image=mendhak/http-https-echo
# start proxy
kubectl proxy
# export pod's name
export POD_NAME=echo
# curl with `-I` - headers and `-L` - follow redirects
curl -IL http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/proxy
HTTP/1.1 301 Moved Permanently
Location: /api/v1/namespaces/default/pods/echo/proxy/
HTTP/1.1 200 OK
The problem is that POD_NAME contains two pod names separated by a space. Therefore the URL that you are trying to reach is:
malformed, because of the space in it, which is the reason for the error message
wrong, because you need to put the name of the single pod you want to access in there, not both at once
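If the two pods belong to the same workload, one way to grab just one of them is to select by label and take the first item; a minimal sketch, assuming the pods carry a label such as app=echo (adjust the label and namespace to your setup):
export POD_NAME=$(kubectl get pods -l app=echo -o jsonpath='{.items[0].metadata.name}')
curl -IL "http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/proxy"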
Using CLI arguments to pass secrets is generally frowned upon: it exposes the secrets to other processes (ps aux) and potentially stores them in the shell history.
Is there a way to create Kubernetes secrets using kubectl that is not exposing the secret as described? I.e. a way to do this interactively?
kubectl create secret generic mysecret --from-literal key=token
You can create the secret from a file instead, e.g.
kubectl create secret generic mysecret --from-file=key=name-of-file.txt
This keeps the secret text off the command line, but it still tells anyone looking through your history where to find the secret text.
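To do this interactively without leaving the secret on disk at all, a minimal sketch (the secret and key names match your example, and it assumes a Linux/macOS shell where /dev/stdin is available; read and printf are shell builtins, so the value shows up neither in ps output nor in the history):
read -rs -p "Enter secret value: " SECRET_VALUE; echo
printf '%s' "$SECRET_VALUE" | kubectl create secret generic mysecret --from-file=key=/dev/stdin
unset SECRET_VALUE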
Also, if you put a space at the start of the line, it does not get added to shell history
kubectl create secret generic mysecret --from-literal....
vs
kubectl create secret generic mysecret --from-literal....
(with space at the start)
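Note that in bash the leading-space trick only works when HISTCONTROL is set accordingly:
export HISTCONTROL=ignoreboth   # ignoreboth = ignorespace + ignoredups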
I'm new to Open Distro for Elasticsearch and trying to run it on a Kubernetes cluster. After deploying the cluster, I need to change the password for the admin user.
I went through this post - default-password-reset
I came to know that, to change the password I need to do the following steps:
exec into one of the master nodes
generate a hash for the new password using /usr/share/elasticsearch/plugins/opendistro_security/tools/hash.sh script
update /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml with the new hash
run /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh with parameters
Questions:
Is there any way to set those (via env or elasticsearch.yml) while bootstrapping the cluster?
I had to recreate the internal_users.yml file with the updated password hashes and mount it at /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml in the database pods.
That way, when the Elasticsearch nodes bootstrapped, they came up with the updated passwords for the default users (e.g. admin).
I used the Go bcrypt package to generate the password hashes.
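For the mounting step, a minimal sketch of how the regenerated file could be fed to the pods (the secret name and namespace below are placeholders, not from the original setup):
kubectl create secret generic internal-users \
  --from-file=internal_users.yml=./internal_users.yml \
  -n elastic
# mount the secret in the Elasticsearch pod spec with a volume + subPath so it lands at
# /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml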
# exec into the master container
docker exec -ti ELASTIC_MASTER bash
# generate a hash for the new password (enter the password when prompted)
/usr/share/elasticsearch/plugins/opendistro_security/tools/hash.sh
# install an editor and replace the old hash with the generated one
yum install nano
nano /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml
# run securityadmin.sh so the change takes effect
sh /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh -cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/ -icl -nhnv -cacert config/root-ca.pem -cert config/admin.pem -key config/admin-key.pem
You can also execute the commands below to obtain the username and password values from your Kubernetes cluster:
kubectl get secret -n wazuh elastic-cred -o go-template='{{.data.username | base64decode}}'
kubectl get secret -n wazuh elastic-cred -o go-template='{{.data.password | base64decode}}'
Note: '-n wazuh' indicates the namespace; use whatever applies to you.
Ref: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html
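Once retrieved, the credentials can be used directly; a sketch, assuming you have port-forwarded the Elasticsearch service to localhost:9200 (the service name below is an assumption):
kubectl port-forward -n wazuh svc/elasticsearch 9200:9200 &
ES_USER=$(kubectl get secret -n wazuh elastic-cred -o go-template='{{.data.username | base64decode}}')
ES_PASS=$(kubectl get secret -n wazuh elastic-cred -o go-template='{{.data.password | base64decode}}')
curl -k -u "$ES_USER:$ES_PASS" https://localhost:9200/_cluster/health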
I need to replace some values in configmap.yaml and I have to automate it through a shell script. How can I do this?
In my case, one line of the config YAML file is
"ABC_KEY" : "*******"
While I am automating the whole installation process, it gets stuck with this error:
error: error parsing yamlfiles/configMap.yaml: error converting YAML to JSON: yaml: line 22: did not find the expected key
What I want is that, while the shell script automates the installation and applies the configMap.yaml, it asks me for the values of the specific keys and I enter them manually; then it should carry on and complete the installation.
Input YAML file:
SECRET_KEY: "******""
JWT_KEY: "********"
SFTP_PWD: "*********"
In the shell script I am executing this command:
kubectl apply -f yamlfiles/configMap.yaml
When the kubectl command runs, I want it to ask me for the values of those keys so that I can enter them manually and the ConfigMap gets applied with them filled in. Is that possible?
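One way to do this is to keep placeholders in the YAML and substitute them at apply time; a minimal sketch, assuming configMap.yaml is rewritten to contain ${SECRET_KEY}, ${JWT_KEY} and ${SFTP_PWD} placeholders instead of the real values (envsubst ships with the gettext package):
read -rs -p "Enter SECRET_KEY: " SECRET_KEY; echo
read -rs -p "Enter JWT_KEY: " JWT_KEY; echo
read -rs -p "Enter SFTP_PWD: " SFTP_PWD; echo
export SECRET_KEY JWT_KEY SFTP_PWD
# substitute the placeholders and apply the result without writing the secrets to disk
envsubst < yamlfiles/configMap.yaml | kubectl apply -f -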