Script to back up namespaces, deployments, etc. from Kubernetes - bash

I'm looking for a bash script that can back up all the Kubernetes resources in YAML format (JSON is fine too :)).
I already back up the Kubernetes config files:
/etc/kubernetes
/etc/systemd/system/kubelet.service.d
etc...
Now I'm just looking to save the
namespaces
deployments
etc...

You can dump your entire cluster info into one file using:
kubectl cluster-info dump > cluster_dump.txt
The above command dumps the resource definitions and container logs into one file (JSON by default; add -o yaml if you prefer YAML).
Or, if you just want YAML files, you can write a script around a few commands such as:
kubectl get deployment -o yaml > deployment.yaml
kubectl get statefulset -o yaml > statefulset.yaml
kubectl get daemonset -o yaml > daemonset.yaml
Keep the namespaces in mind as well while writing the script (the commands above only cover the current namespace). This should give you a fair idea of what to do.
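For example, a minimal sketch along these lines (assuming kubectl is already pointed at the cluster; the resource list and backup path are placeholders to adjust):
#!/bin/bash
# Sketch: dump each namespace plus a few namespaced resource types as YAML.
BACKUP_DIR=${1:-/root/k8s-backup}
RESOURCES="deployment statefulset daemonset service configmap"

for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  mkdir -p "$BACKUP_DIR/$ns"
  # Save the namespace object itself, then each resource type inside it.
  kubectl get namespace "$ns" -o yaml > "$BACKUP_DIR/$ns/namespace.yaml"
  for res in $RESOURCES; do
    kubectl get "$res" -n "$ns" -o yaml > "$BACKUP_DIR/$ns/$res.yaml"
  done
done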

Try the command below; you can include all the namespaces that you want to back up:
mkdir -p /root/k8s-backup
kubectl cluster-info dump --namespaces default,kube-system --output-directory=/root/k8s-backup

Related

kubernetes redis-cluster pod by pod login using shell script?

Currently, I am trying to prepare a shell script to back up the redis-cluster data of each pod running in a particular namespace.
I want to achieve it in two steps:
1. Log in to each pod one by one, connect with redis-cli, and execute the BGSAVE command.
2. Copy the dump.rdb file from each pod and place it in a backup folder.
For the first part, I have prepared the following code snippet:
#!/bin/bash
NS=$1
for i in $(cat "${NS}_POD_LIST");
do
  echo "POD: $i"
  kubectl exec -it "$i" -n "$NS" -- bash -c "redis-cli -c BGSAVE"
  ## After BGSAVE I want to get out of redis-cli, but it gets stuck here, unable to switch to the other pods
done
Now for the second part, I want to copy the dump.rdb file from each pod to the destination folder, but this has to be executed from outside the pod:
SOURCE_DIR="/redis/data"
BACKUP_DIR="/pod/backup"
for i in $(cat "${NS}_POD_LIST");
do
  echo "POD: $i"
  kubectl cp "$NS/$i:$SOURCE_DIR" "$BACKUP_DIR"
done
Please let me know how I can achieve this with these code snippets.
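One way to approach this (a hedged sketch, not a drop-in answer: dropping -it keeps kubectl exec from waiting on an attached terminal, which is what makes the first loop hang, and each dump is copied into its own per-pod subfolder; the fixed sleep is an assumption, polling LASTSAVE would be more robust):
#!/bin/bash
NS=$1
SOURCE_DIR="/redis/data"
BACKUP_DIR="/pod/backup"

for i in $(cat "${NS}_POD_LIST"); do
  echo "POD: $i"
  # Trigger a background save non-interactively so the loop keeps moving.
  kubectl exec "$i" -n "$NS" -- redis-cli BGSAVE
  # Give the save a moment to finish before copying the dump.
  sleep 5
  mkdir -p "$BACKUP_DIR/$i"
  kubectl cp "$NS/$i:$SOURCE_DIR" "$BACKUP_DIR/$i"
done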

Executing a kubernetes pod with the use of pod name

I am writing a shell script to exec into a pod, for which the syntax is:
winpty kubectl --kubeconfig="C:\kubeconfig" -n namespace exec -it podname bash
This works fine, but since the pod name is not stable and changes with every deployment, is there any alternative to this?
Thanks.
You can use the normal kubectl exec command, but look up the pod name dynamically.
Assuming that you have a deployment with pods labeled app=example, simply execute:
$ kubectl exec -it $(kubectl get pods -l app=example -o custom-columns=:metadata.name) -- bash
EDIT:
You can also execute:
POD_NAME=$(kubectl get pods -l app=example -o custom-columns=":metadata.name")
or
POD_NAME=$(kubectl get pods -l app=example -o jsonpath="{.items[0].metadata.name}")
and finally:
$ winpty kubectl exec -ti $POD_NAME -- bash
Make sure that you execute the command in the proper namespace; you can also add the -n flag to define it.
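Putting the pieces together, a hedged sketch of the wrapper script the question is after (the app=example label, the namespace, and the kubeconfig path are placeholders taken from the examples above):
#!/bin/bash
NAMESPACE="namespace"
KUBECONFIG_PATH="C:\kubeconfig"

# Look up whichever pod currently carries the label, then exec into it.
POD_NAME=$(kubectl --kubeconfig="$KUBECONFIG_PATH" -n "$NAMESPACE" get pods -l app=example -o jsonpath="{.items[0].metadata.name}")
winpty kubectl --kubeconfig="$KUBECONFIG_PATH" -n "$NAMESPACE" exec -it "$POD_NAME" -- bash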
You can use the following command:
kubectl -n <namespace> exec -it deploy/<deployment-name> -- bash
Add a service to your application:
As you know, pods are ephemeral; they come in and out of existence dynamically to keep your application in compliance with your configuration. This behavior implements the scaling and self-healing aspects of Kubernetes.
Your application will consist of one or more pods that are accessible through a Service. The application's Service name and address do not change, and so act as the stable interface for reaching your application.
This method works whether your application has one pod or many pods.
Does that help?

Grep command to extract a kubernetes pod name

I am currently working on a bash script to reduce the time it takes for me to build the db for a project.
Currently, I have several databases running in the same namespace, and I want to extract only a specific pod's name.
I run kubectl get pods:
NAME      READY   STATUS    RESTARTS   AGE
elastic   1/1     Running   0          37h
mysql     1/1     Running   0          37h
Now I want to save one of the pod names.
I'm currently running foo=$(kubectl get pods | grep -e "mysql"),
and it returns mysql 1/1 Running 0 37h, which is the expected result of the command. Now I just want to extract the pod name into that variable so that I can pass it on later.
This should work for you
foo=$(kubectl get pods | awk '{print $1}' | grep -e "mysql")
kubectl already allows you to extract only the names:
kubectl get pods -o=jsonpath='{range .items..metadata}{.name}{"\n"}{end}' | fgrep mysql
If I'm not mistaken, you merely need to get the pod names so you can reuse them later.
kubectl get --help provides a lot of good information on what you can achieve with just kubectl, without invoking the rest of the heavy artillery like awk, sed, etc.
List a single pod in JSON output format.
kubectl get -o json pod web-pod-13je7
List resource information in custom columns.
kubectl get pod test-pod -o custom-columns=CONTAINER:.spec.containers[0].name,IMAGE:.spec.containers[0].image
Return only the phase value of the specified pod.
kubectl get -o template pod/web-pod-13je7 --template={{.status.phase}}
In this particular case I see at least 2 workarounds:
1) Custom columns. You can get virtually any output (and then you can grep/tr/awk if needed):
$ kubectl get pods --no-headers=true -o custom-columns=NAME_OF_MY_POD:.metadata.name
mmmysql-6fff9ffdbb-58x4b
mmmysql-6fff9ffdbb-72fj8
mmmysql-6fff9ffdbb-p76hx
mysql-tier2-86dbb787d9-r98qw
nginx-65f88748fd-s8mgc
2) jsonpath (the one #vault provided):
kubectl get pods -o=jsonpath='{.items..metadata.name}'
Hope that sheds light on options you have to choose from.
Let us know if that helps.
kubectl get pods | grep YOUR_POD_STARTING_NAME
For Example
kubectl get pods | grep mysql
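Tying it back to the original goal of saving the name into a variable for later use, a small hedged example (the mysql match and the follow-up kubectl logs call are just placeholders):
# Grab only the mysql pod's name, then reuse it later in the script.
POD_NAME=$(kubectl get pods --no-headers=true -o custom-columns=":metadata.name" | grep mysql)
echo "Found pod: $POD_NAME"
kubectl logs "$POD_NAME"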

Logstash cannot start because of multiple instances even though there are no instances of it running

I keep getting this error when I launch Logstash:
[2019-02-26T16:50:41,329][FATAL][logstash.runner ] Logstash could not be started because there is already another instance using the configured data directory. If you wish to run multiple instances, you must change the "path.data" setting.
I am using the CLI to launch Logstash. The command that I execute is:
screen -d -S logstash -m bash -c "cd;export JAVA_HOME=/nastools/jdk1.8.0_77/; export LS_JAVA_OPTS=-Djava.net.preferIPv4Stack=true; ~/monitoring/6.2.3/bin/logstash-6.2.3/bin/logstash -f ~/monitoring/6.2.3/config/logstash_forwarder/forwarder.conf"
I don't have any instance of Logstash running. I tried running ps xt | grep "logstash" and it didn't return any process. I also tried killall logstash, but to no avail; it gives me the same error. I tried restarting my machine as well, but still the same error.
Has anyone experienced something similar? Kibana and Elasticsearch launch just fine.
Thanks in advance for your help!
The problem is solved now. I had to empty the contents of Logstash's data directory. I then restarted it, and it regenerated the UUID and other files it needed.
To be more specific, you need to cd to the data folder of Logstash (usually /usr/share/logstash/data) and delete the .lock file.
You can see if this file exists with:
ls -lah
in the data folder.
I learned this from http://www.programmersought.com/article/2009814657/
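For instance, assuming the default data path (adjust it if your install differs):
cd /usr/share/logstash/data
ls -lah        # confirm the stale .lock file is present
sudo rm .lock  # remove it, then start Logstash again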
Try this command; I hope it will work (but please check the .conf file path). The --path.data flag points this instance at its own data directory, so it won't clash with another one:
sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash/ --path.data sensor39 -f /etc/logstash/conf.d/company_dump.conf --config.reload.automatic

Dockerize ruby script that takes directories as input/output

I am very new to Docker, and I need help dockerizing a Ruby script that takes an input directory and an output directory,
i.e generate_rr_pair.rb BuildRR -n /data/ -o /output
What the script does is take the -n option (input) and check if the directory exists; if it does, it uses the files inside as input. The script then writes data to the -o option (output). If the output directory doesn't exist, the script creates it and outputs files there.
How can I create a Dockerfile to handle this? Should I pass these in as environment variables, or should I use mounted volumes? Since the script handles file I/O, I am not sure whether I want volumes. The input directory should already exist on the host, and the output directory will be created. Both directories should remain after the Docker container stops.
Use the official Ruby image in your Dockerfile:
FROM ruby:2.1-onbuild
CMD ["ruby", "generate_rr_pair.rb"]
Build the image as normal:
docker build -t myruby .
Which can then be run as follows:
docker run --rm -it -v /data:/data -v /output:/output myruby BuildRR -n /data -o /output
Note that volume mappings are required if you want the ruby script within the container to operate on directories mounted on the host machine.
