How to parse a string from kubectl command output in a shell script? - bash

kubectl get nodes -o name gives me the output
node/k8s-control.anything
node/k8s-worker1.anything
I need to get only
control
worker1
as output and want to iterate through these elements
for elm in $(kubectl get nodes -o name); do echo "$elm" >> file.txt; done
So the question is how to get the string between node/k8s- and .anything and iterate these in the for loop.

You can, for example, use cut twice: first to get the part after -, and
then to get the part before .:
for elm in $(kubectl get nodes -o name | cut -d- -f2 | cut -d. -f1); do echo "$elm" >> file.txt; done
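You can verify the two cut passes without a cluster by feeding the same strings through printf:

```shell
# Simulated kubectl output; the first cut keeps everything after the
# first '-', the second cut keeps everything before the first '.'
printf 'node/k8s-control.anything\nnode/k8s-worker1.anything\n' \
  | cut -d- -f2 | cut -d. -f1
```

This prints control and worker1, one per line.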

With awk
kubectl get nodes -o name | awk -F'[.-]' '{print $2}' > file.txt
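With -F'[.-]' awk splits on either a dot or a dash, so node/k8s-control.anything becomes the fields node/k8s, control, and anything. A quick check with printf-simulated output:

```shell
# Field 2 is the piece between the first '-' and the first '.'
printf 'node/k8s-control.anything\nnode/k8s-worker1.anything\n' \
  | awk -F'[.-]' '{print $2}'
```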

You can use grep with the -oP flags to extract the desired substring. Then you can use the > operator to redirect the output to file.txt.
kubectl get nodes -o name | grep -oP 'node.*?-\K[^.]+'
control
worker1
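The \K is a PCRE escape (hence -P, which needs GNU grep) that discards everything matched so far, so only the part after the dash is reported. The same pattern can be tested with printf instead of a live cluster:

```shell
# 'node.*?-' matches lazily up to the first '-', \K drops that prefix,
# and [^.]+ reports the text up to the first '.'
printf 'node/k8s-control.anything\n' | grep -oP 'node.*?-\K[^.]+'
```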

Another option might be bash parameter expansion:
while read -r line ; do line="${line#*-}"; line="${line%.*}"; printf "%s\n" "$line" ; done < <(kubectl get nodes -o name)
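The two expansions are easy to check on a single sample string: ${line#*-} removes the shortest prefix ending in -, and ${line%.*} removes the shortest suffix starting at a dot:

```shell
line='node/k8s-control.anything'
line="${line#*-}"    # strip through the first '-'  -> control.anything
line="${line%.*}"    # strip from the last '.'      -> control
printf '%s\n' "$line"
```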

Related

User input into variables and grep a file for pattern

Hi!
So I am trying to run a script which looks for a string pattern.
For example, from a file I want to find two words, located separately:
"I like toast, toast is amazing. Bread is just toast before it was toasted."
I want to invoke it from the command line using something like this:
./myscript.sh myfile.txt "toast bread"
My code so far:
text_file=$1
keyword_first=$2
keyword_second=$3
find_keyword=$(cat $text_file | grep -w "$keyword_first""$keyword_second" )
echo $find_keyword
I have tried a few different ways. Directly from the command line I can make it run using:
cat myfile.txt | grep -E 'toast|bread'
I'm trying to put the user input into variables and use the variables to grep the file
You seem to be looking simply for
grep -E "$2|$3" "$1"
What works on the command line will also work in a script, though you will need to switch to double quotes for the shell to replace variables inside the quotes.
In this case, the -E option can be replaced with multiple -e options, too.
grep -e "$2" -e "$3" "$1"
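A quick check with a throwaway file (the file contents below are invented for the test):

```shell
tmp=$(mktemp)
printf 'I like toast, toast is amazing.\nBread is just bread.\nno match here\n' > "$tmp"
keyword_first=toast
keyword_second=bread
# Prints the first two lines; the third matches neither pattern
grep -e "$keyword_first" -e "$keyword_second" "$tmp"
rm -f "$tmp"
```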
You can pipe to grep twice:
find_keyword=$(cat "$text_file" | grep -w "$keyword_first" | grep -w "$keyword_second")
Note that your search word "bread" is not found because the string contains the uppercase "Bread". If you want to find the words regardless of this, you should use the case-insensitive option -i for grep:
find_keyword=$(cat "$text_file" | grep -w -i "$keyword_first" | grep -w -i "$keyword_second")
In a full script:
#!/bin/bash
#
# usage: ./myscript.sh myfile.txt "toast" "bread"
text_file=$1
keyword_first=$2
keyword_second=$3
find_keyword=$(cat "$text_file" | grep -w -i "$keyword_first" | grep -w -i "$keyword_second")
echo "$find_keyword"

grep from two variables

I am trying to eliminate the duplicate lines of a list like this.
LINES='opa
opa
eita
eita
argh'
DUPLICATE='opa
eita'
The output I am looking for is argh.
Till now, this is what I tried:
echo -e "$DUPLICATE" | grep --invert-match -Ff- <(echo -e "$LINES")
And:
grep --invert-match -Ff- <(echo -e "$DUPLICATE") <(echo -e "$LINES")
But unsuccessfully.
I know that I can achieve this if I put the content of $LINES into a file:
echo -e "$DUPLICATE" | grep --invert-match -Ff- FILE
But I'd like to know if this is possible only with variables.
Passing a dash as the file name to -f means "read from stdin". Get rid of it so the file name given to -f is the process substitution.
There's no need for echo -e, and -v is shorter and more common than --invert-match.
echo "$LINES" | grep -vFf <(echo "$DUPLICATE")
Equivalently, using a herestring:
grep -vFf <(echo "$DUPLICATE") <<< "$LINES"
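Both <( ) and <<< are bash features, so run this under bash rather than plain sh; with the variables from the question it prints just argh:

```shell
LINES='opa
opa
eita
eita
argh'
DUPLICATE='opa
eita'
# Patterns come from the process substitution; input from the herestring
grep -vFf <(echo "$DUPLICATE") <<< "$LINES"
```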
Another approach, which doesn't require creating a duplicate list separately:
$ awk '{a[$0]++} END{for(k in a) if(a[k]==1) print k}' <<< "$LINES"
Count the occurrences of each line and print a line only if it is not duplicated (count == 1).

Bash: Use printf for comma-separated columns

I'm attempting to write output from ps into two comma-separated columns with custom headers, which I can write to a CSV file. The target format looks like:
Process ID,Command name
282,sort
280,ps
284,head
136,bash
283,awk
281,awk
Here's the command I've composed so far:
ps -o pid="Process ID" -o comm="Command name" | (read -r; printf "%s\n" "$REPLY"; sort -k2 -r)
which produces the following output:
Process ID Command name
23104 sort
24756 ps
24757 bash
19320 bash
23103 awk
I need to replace the whitespace characters in each line (except the first, which needs special processing) with commas. Is there a way to do said replacement in the printf command? Or am I approaching this wrong?
ps -o pid,comm --no-headers | awk 'BEGIN{print "Process ID,Command name"}{$1=$1}1' OFS=,
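The $1=$1 assignment is what inserts the commas: assigning to any field makes awk rebuild the record with OFS between the fields, and the trailing 1 is an always-true condition that prints each line. With canned input instead of ps:

```shell
printf ' 282 sort\n 280 ps\n' \
  | awk 'BEGIN{print "Process ID,Command name"}{$1=$1}1' OFS=,
# Rebuilding the record also drops the leading whitespace ps emits
```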
You can use the printf + read combo. However, in your code, printf prints only the first line, read by read; the rest is printed by sort. A while loop is your friend here:
printf 'Process ID,Command name\n'
while read -r id cmd; do
printf '%s,%s\n' "$id" "$cmd"
done < <(ps -o pid,comm --no-headers)
And to have the output of ps sorted, pipe it to sort like you did, or use the --sort option:
ps -o pid,comm --no-headers --sort -comm

nslookup capture stderr in a variable and display

In a shell script I am running nslookup on a number of URLs.
Sometimes a URL returns a "cannot resolve" error. I need to capture those errors in a variable.
Here is the code for nslookup, which captures the returned IP addresses:
output=$(nslookup "$URL" | grep Add | grep -v '#' | cut -f 3 -d ' ' | awk 'NR>1' )
Now, in the same output variable, I want to capture the error
nslookup: can't resolve
I am capturing stdout in a file.
I have tried different redirections, such as 2>&1, but the error does not get assigned to the variable. I do not want the error redirected to a separate file; I want it recorded in the output variable above.
As long as you are using awk, you can simplify things considerably
nslookup "$URL" 2>&1 |
awk -e '/Add/ && !/#/ && NR > 1 { print $2 }' \
    -e '/resolve|NXDOMAIN/ { print "error" }'
Where one line has been broken into three for clarity. I cannot reproduce the problem you describe with 2>&1, nor do I believe it should fail.
The redirection of stderr works when you use
output=$(nslookup "$URL" 2>&1 | grep Add | grep -v '#' | cut -f 3 -d ' ' | awk 'NR>1')
but it is futile since you filter it out immediately with the grep Add. You need to rethink your logic and what you really want. Maybe a better approach is
output=$(nslookup "$URL" 2>&1)
case $output in
(nslookup:*) ;;
(*) output=$(echo "$output" | grep Add | ...);;
esac
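The two branches can be exercised without doing any DNS lookups by faking the two kinds of nslookup output (the strings below are invented):

```shell
for output in "nslookup: can't resolve 'bad.example'" "Address: 93.184.216.34"; do
  case $output in
    (nslookup:*) echo "error: $output" ;;   # stderr line captured via 2>&1
    (*)          echo "lookup ok" ;;        # normal output, filter as before
  esac
done
```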

what does grep -v '^#' do

My program looks like this.
ALL=`cat $1 | grep -v '^#' | wc -l`
FINISHED="0"
for i in `cat $1 | grep -v '^#'`; do
    echo "PROBE $i"
    # ... some operation on each probe ...
    FINISHED=`echo $FINISHED"+1"|bc`
done
I will run this script by giving a file name as a parameter, where a list of probes will be present.
I have 2 questions
What does grep -v '^#' mean? I learnt that ^ is usually used to match a particular string. But in the file which I give there is no #. Moreover, I am getting the total number of probes from cat $1 | grep -v '^#' | wc -l.
In echo $FINISHED"+1"|bc, any idea why the developer has added |bc?
^ means "start of line"
# is the literal character #
-v means "invert the match" in grep, in other words, return all non matching lines.
Put those together, and your expression is "select all lines that do not begin with #"
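For example, with invented file contents:

```shell
# The comment line is filtered out; the probe names survive
printf '# this is a comment\nprobe1\nprobe2\n' | grep -v '^#'
```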
| is the pipe character, it takes the output of the command on the left hand side, and uses it as the input of the command on the right hand side. bc is like a command line calculator (to do basic math).
I would use this to exclude comments from the code I'm reading: all comment lines start with #, and I don't want to see them if there are too many of them.
grep -v '^#'
We have different ways of doing the calculation. Pick the one you like.
a=`echo 1+1 | bc`; echo $a
b=$((1+1)); echo $b
c=`expr 1 + 1`; echo $c
let d=1+1; echo $d

Resources