I have pods in my OpenShift cluster and want to work on multiple OpenShift applications. Let's say, like below:
sh-4.2$ oc get pods
NAME                            READY   STATUS      RESTARTS   AGE
jenkins-7fb689fc66-fs2xb        1/1     Running     0          4d
jenkins-disk-check-1587834000   0/1     Completed   0          21h
jenkins-7fb689fc66-gsz9j        0/1     Running     735        9d
jenkins-9euygc66-gsz9j          0/1     Running     735        9d
I have tried the below command:
oc get pods
export POD=$(oc get pods | awk '{print $1}' | grep jenkins*)
I want to find the pods whose names start with "jenkins-", such as jenkins-7fb689fc66-fs2xb and jenkins-9euygc66-gsz9j, via scripting, and I need to ignore the disk-check pods. Once I have those pods, I need to open a terminal in each one and run some shell commands programmatically. Can someone help me with this?
kubectl get (and by extension oc get) is a very versatile tool. Unfortunately, after looking around online for a while, I can say you will definitely not be able to use regex without relying on an external tool like awk or grep. (I know this wasn't exactly what you were asking, but I figured I'd at least see whether it was possible.)
With that said, there are a couple of tricks you can rely on to filter your oc get output before you even have to pull in external tools (bonus points because this filtering occurs on the server before the data even reaches your local tools).
I first recommend running oc get pods --show-labels, because if the pods you need are appropriately labeled, you can use a label selector to get just the pods you want, e.g.:
oc get pods --selector name=jenkins
oc get pods --selector <label_key>=<label_value>
Second, if you only care about the Running pods (since the disk-check pods look like they're already Completed), you can use a field selector, e.g.:
oc get pods --field-selector status.phase=Running
oc get pods --field-selector <json_path>=<json_value>
Finally, if there's a specific value that you're after, you can pull that value into the CLI by specifying custom columns, and then grepping for the value you care about, e.g.:
oc get pods -o custom-columns=NAME:.metadata.name,TYPES:.status.conditions[*].type | grep "Ready"
The best thing is, if you rely on the label selector and/or field selector, the filtering occurs server side to cut down on the data that ends up making it to your final custom columns, making everything that much more efficient.
For your specific use case, it appears that simply using the --field-selector would be enough, since the disk-check pods are already Completed. So, without further information on exactly how the Jenkins pod's JSON is constructed, this should be good enough for you:
oc get pods --field-selector status.phase=Running
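Putting this together for the question's case, a minimal sketch (the `^pod/jenkins-` pattern and the command run inside each pod are assumptions based on the output shown above):

```shell
# List only Running pods by name, keep the jenkins-* pods, drop the
# disk-check pods, then run a shell command inside each one.
# The echo is just a placeholder for your real commands; newer
# oc/kubectl versions accept the pod/<name> form directly in exec.
pods=$(oc get pods --field-selector=status.phase=Running -o name \
  | grep '^pod/jenkins-' \
  | grep -v 'disk-check')

for pod in $pods; do
  oc exec "$pod" -- sh -c 'echo "hello from $(hostname)"'
done
```

`-o name` prints one `pod/<name>` per line, which is convenient both for grep and for passing straight to oc exec.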
Assuming that you need to print the jenkins id from the first field, could you please try the following.
awk 'match($0,/jenkins[^ ]*/){print substr($0,RSTART,RLENGTH)}' Input_file
Explanation: adding an explanation for the above code.
awk '                               ##Starting awk program from here.
match($0,/jenkins[^ ]*/){           ##Using match function with regex jenkins up to a space in the current line.
  print substr($0,RSTART,RLENGTH)   ##Printing the sub-string of the current line from RSTART for RLENGTH characters.
}
' Input_file                        ##Mentioning Input_file name here.
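For example, feeding one line of the question's oc get pods output through the one-liner prints just the pod name:

```shell
# The match() regex grabs "jenkins" plus everything up to the first
# space, so only the pod name survives.
printf 'jenkins-7fb689fc66-fs2xb 1/1 Running 0 4d\n' \
  | awk 'match($0,/jenkins[^ ]*/){print substr($0,RSTART,RLENGTH)}'
# prints: jenkins-7fb689fc66-fs2xb
```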
Adding this answer for others' reference. You could use it this way:
export POD=$(oc get pods | awk '{print $1}' | grep '^jenkins' | grep -v jenkins-disk-check)
The problem is pretty simple, but I am struggling to understand where to even look for a solution. I want to iterate over a list of pods I retrieve from openshift and then output some of its properties.
In other words, what I want to do is this:
for node in $(oc get nodes);
do
echo ${node.name}
echo ${node.role}
done
Unfortunately that leads to the error "line 67: ${node.name}: bad substitution".
Just iterating over the nodes with echo $node works fine, but it just lists all the properties, one property value per line.
The output of just oc get nodes is:
NAME STATUS ROLES AGE VERSION
avaloq-abcde-master-0-xyz Ready master 38d v1.xx
avaloq-abcde-master-1-dfs Ready master 38d v1.xx
avaloq-abcde-master-2-gsd Ready master 38d v1.xx
The output of
for node in $(oc get nodes -o name);
do
echo ${node}
done
is
node/avaloq-abcde-master-0-xyz
node/avaloq-abcde-master-1-dfs
node/avaloq-abcde-master-2-gsd
If I try to add another property via -o (the output format?), it throws the following error:
error: unable to match a printer suitable for the output format "roles", allowed formats are: custom-columns,custom-columns-file,go-template,go-template-file,json,jsonpath,jsonpath-as-json,jsonpath-file,name,template,templatefile,wide,yaml
What I expect is the following output
node/avaloq-abcde-master-0-xyz
master
node/avaloq-abcde-master-1-dfs
master
node/avaloq-abcde-master-2-gsd
master
I assume that is where my understanding is lacking. Is the "array of objects/table" returned by oc get nodes not actually an array of objects, but rather just text separated by whitespace and newline characters?
You can use something like this if you only need specific columns from the oc get nodes command (--no-headers drops the NAME STATUS ... header row so it doesn't end up in the output):
oc get nodes --no-headers | awk '{print $1, $3}'
which will print output like
avaloq-abcde-master-0-xyz master
avaloq-abcde-master-1-dfs master
avaloq-abcde-master-2-gsd master
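If you want the exact two-lines-per-node output shown in the question, a small while read loop over the headerless table works too (a sketch; the node/ prefix is added manually to mimic the -o name form):

```shell
# Read each data row: the first column is the node name, the third its
# roles. Print them on separate lines, as in the expected output.
oc get nodes --no-headers | while read -r name status roles rest; do
  echo "node/$name"
  echo "$roles"
done
```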
I am writing a script that runs each minute and looks for newly created pods. The script executes commands within these pods.
I need to add the missing part to my current solution which looks like this:
while true; do
pods=$(kubectl get pods --field-selector status.phase=='Running' ---------- something should be here ----------)
for pod in ${pods[@]} ; do
actions
done
sleep 60;
done
My current solution looks like this. Please feel free to review and enhance it.
current_pods=$(kubectl get pods | grep "Running" | awk '{print $1}')
# attach existing pods
for pod in $current_pods; do
do-something
done
# attach newly created pods each 60s
while true; do
new_pods=$(kubectl get pods | grep "Running" | awk '{print $1}')
for pod in $new_pods; do
if [[ ! " ${current_pods[@]} " =~ $pod ]]; then
do-something
fi
done
current_pods=$new_pods
sleep 60;
done
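One way to enhance this is to track a "seen" list instead of comparing against the previous snapshot, so a pod that disappears and another that appears within the same interval can't be confused (a sketch; do-something is the placeholder from the question):

```shell
# Keep every pod name we've already handled in a space-separated list;
# act only on names not yet in the list. POSIX sh compatible.
seen=""
while true; do
  for pod in $(kubectl get pods --field-selector=status.phase=Running \
      -o jsonpath='{.items[*].metadata.name}'); do
    case " $seen " in
      *" $pod "*) ;;                  # already handled, skip
      *) seen="$seen $pod"
         do-something "$pod" ;;
    esac
  done
  sleep 60
done
```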
I'd suggest making use of the Kubernetes watch and/or events APIs.
Concrete implementation depends on what events you are actually interested in.
What do you mean by "newly created pods"?
This command will stream you the names of new pods entering the Running phase.
# --watch-only can be replaced by --watch
# --watch will show all current pods in Running state + stream changes
# --watch-only will just stream changes
kubectl get pods --field-selector status.phase=='Running' \
  --watch-only -o jsonpath='{.metadata.name}{"\n"}'
But this might be not what you want.
"Running" phase refers to a pod state, not to a state of related containers.
https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase
As a result you can get the following output.
# from API server point of view all these pods are in Running phase,
# regardless of some containers failing
kubectl get pods --field-selector status.phase=Running --watch-only
NAME READY STATUS RESTARTS AGE
pod1 2/2 Running 30 31h
pod2 1/2 Running 30 31h
pod2 1/2 Error 30 31h
pod2 1/2 CrashLoopBackOff 30 30h
This command will stream you the names of newly scheduled pods (i.e. the scheduling decision has been dispatched to a kubelet, but the pod's containers may not be running yet).
This can be viewed as the moment of actual pod object creation, but it doesn't say anything about whether the pod's containers have successfully started.
kubectl get events --field-selector involvedObject.kind=Pod,reason=Scheduled \
--watch-only -ojsonpath='{.involvedObject.name}{"\n"}'
This command will stream you the names of new pods which have started successfully.
Technically it means that certain conditions are met, like "all containers in the Pod are ready" and "the Pod is able to serve requests and should be added to the load balancing pools of all matching Services".
https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions
# as above: you can replace --watch-only by --watch
# --watch will show the pods that are already running successfully + stream newcomers
# --watch-only will just stream newcomers
kubectl get pods --field-selector status.phase=Running --watch-only -o json | \
jq '. | select (.status.conditions[] | select (.type=="Ready" and .status=="True")) | .metadata.name'
Using these examples you could construct necessary conditions according to your needs and run a simple command like this.
# kubectl watch will timeout in a period of time
# timeout value is configured by API server param
# --min-request-timeout [default=1800s]
# also if API server restarts, the connection will be dropped
# so we still need 'while true' here
while true; do
kubectl get events --field-selector involvedObject.kind=Pod,reason=Scheduled \
--watch-only -ojsonpath='{.involvedObject.name}{"\n"}' | \
while read pod; do
echo "$pod"
done
done
Frankly, this approach doesn't guarantee 100% consistency. You may miss some events with --watch-only (say, when the client restarts), and you may get duplicates with --watch.
For simple use cases, keeping a short-lived dictionary of processed pods is probably fine.
For more sophisticated tasks you could turn to advanced usage of watch APIs.
There is a good article to start with.
https://learnk8s.io/real-time-dashboard
For a GitLab CI/CD project, I need to find the URL of a Knative service (used to deploy a web service) so that I can use it as the base URL for load testing.
I have found that I can get the URL (and other information) with the command kubectl get ksvc helloworld-go, which outputs:
NAME URL LATESTCREATED LATESTREADY READY REASON
helloworld-go http://helloworld-go.default.34.83.80.117.xip.io helloworld-go-96dtk helloworld-go-96dtk True
Can someone please show me an easy way to extract only the URL in an sh script? I believe the easiest way might be to grab the text between the first and second space on the second line.
kubectl get ksvc helloworld-go | grep -oP "http://[^\t]*"
or
kubectl get ksvc helloworld-go | grep -Eo "http://[^[:space:]]*"
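Two alternatives that avoid regex on the full table (the first assumes a Knative service, which exposes the URL directly on the object's status):

```shell
# Option 1: ask for the field directly (Knative stores it at .status.url).
kubectl get ksvc helloworld-go -o jsonpath='{.status.url}'

# Option 2: drop the header row and print the second column of the table.
kubectl get ksvc helloworld-go --no-headers | awk '{print $2}'
```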
When a disk is inserted into my cluster, I want to know about it.
So I need to watch /var/adm/messages, and when I catch a !NEW! "online" line I must write it to a different log file.
When disk goes online I get this kind of log entries:
Dec 8 10:10:46 SMNODE01 genunix: [ID 408114 kern.info] /scsi_vhci/disk#g5000c50095f92a8f (sd69) online
tail works without the -F option, but I need the -F option :/
tail messages | grep 408114 | grep '/scsi_vhci/disk#'| egrep -wi --color 'online'
I have three fixed strings for grep:
1- The id "408114", which is unique to the online status.
2- /scsi_vhci/disk#
3- online
P.S: Sorry for my english :)
For grep AND, use .* (quoting the pattern so the shell doesn't expand the * as a glob):
$ grep '408114.*/scsi_vhci/disk#.*online' test
Dec 8 10:10:46 SMNODE01 genunix: [ID 408114 kern.info] /scsi_vhci/disk#g5000c50095f92a8f (sd69) online
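To keep watching the file and append new matches to a separate log, the same pattern can be combined with tail -F (a sketch; the output path is an assumption, and --line-buffered assumes GNU grep):

```shell
# Follow the messages file across rotations (-F) and append each new
# matching "online" line to a dedicated log file as it appears.
tail -F /var/adm/messages \
  | grep --line-buffered '408114.*/scsi_vhci/disk#.*online' \
  >> /var/log/disk-online.log
```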
Next time, don't completely edit the question; ask a new question instead.
On the Parse.com cloud-code console, I can see logs, but they only go back maybe 100-200 lines. Is there a way to see or download older logs?
I've searched their website & googled, and don't see anything.
Using the parse command-line tool, you can retrieve an arbitrary number of log lines:
Usage:
parse logs [flags]
Aliases:
logs, log
Flags:
-f, --follow=false: Emulates tail -f and streams new messages from the server
-l, --level="INFO": The log level to restrict to. Can be 'INFO' or 'ERROR'.
-n, --num=10: The number of the messages to display
Not sure if there is a limit, but I've been able to fetch 5000 lines of log with this command:
parse logs prod -n 5000
To add on to Pascal Bourque's answer, you may also wish to filter the logs by a given range of dates. To achieve this, I used the following:
parse logs -n 5000 | sed -n '/2016-01-10/, /2016-01-15/p' > filteredLog.txt
This will get up to 5000 logs, use the sed command to keep all of the logs which are between 2016-01-10 and 2016-01-15, and store the results in filteredLog.txt.
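The sed range can be sanity-checked on hypothetical log lines; it prints everything from the first line matching the start date through the first line matching the end date:

```shell
# Lines dated 2016-01-10 through 2016-01-15 are kept, the rest dropped.
printf '2016-01-09 before\n2016-01-10 start\n2016-01-12 middle\n2016-01-15 end\n2016-01-16 after\n' \
  | sed -n '/2016-01-10/, /2016-01-15/p'
```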