How can I retrieve a numeric value from a text file in a shell script? - bash

The content below has been written to a text file called test.txt. How can I retrieve the pending and completed count values in a shell script?
<p class="pending">Count: 0</p>
<p class="completed">Count: 0</p>
Here's what I tried:
#!/bin/bash
echo
echo 'Fetching job page and write to Jobs.txt file...'
curl -o Jobs.txt https://cms.test.com
completestatus=`grep "completed" /home/Jobs.txt | awk -F "<p|</p>" '{print $2 }' | awk '{print $4 }'`
echo $completestatus
if [ "$completestatus" == 0 ]; then

A grep and an awk command can almost always be combined into one awk command, and two awk commands can almost always be combined into one awk command as well.
This solves your immediate problem (using a little awk type casting trickery).
completedStatus=$(echo '<p class="pending">Count: 0</p>
<p class="completed">Count: 0</p>' \
| awk -F : '/completed/{var=$2+0.0;print var}' )
echo completedStatus=$completedStatus
The output is
completedStatus=0
Note that you can combine grep and awk with
awk -F : '/completed/' test.txt
which filters to just the completed line. The output is
<p class="completed">Count: 0</p>
When I added your -F argument, the output didn't change, i.e.
awk -F'<p|</p>' '/completed/' test.txt
output
<p class="completed">Count: 0</p>
So I relied on using : as the field separator (-F). Printing just the second field,
awk -F : '/completed/{print $2}'
output
0</p>
When performing a calculation, awk reads a value "looking" for a number at the front; if it finds one, it keeps reading until it hits a non-numeric character (or runs out of data). So ...
awk -F : '/completed/{var=$2+0.0;print var}' test.txt
output
0
Finally we arrive at the solution above: wrap the code in a modern command substitution, i.e. $( ... cmds ... ), and send the output to the completedStatus= assignment.
In case you're thinking that the +0.0 addition is what is being output, you can change your file to show completed count = 10, and the output will be 10.
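The same approach drops straight into the original script. Here is a minimal sketch using the URL and Jobs.txt file name from the question (the echo inside the if is just a placeholder):
#!/bin/bash
echo 'Fetching job page and write to Jobs.txt file...'
curl -o Jobs.txt https://cms.test.com
# grab each count: $2 is everything after the colon, +0 forces it to a number
completedStatus=$(awk -F : '/completed/{print $2+0}' Jobs.txt)
pendingStatus=$(awk -F : '/pending/{print $2+0}' Jobs.txt)
echo "pending=$pendingStatus completed=$completedStatus"
if [ "$completedStatus" -eq 0 ]; then
    echo "No jobs completed yet"
fi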
IHTH

Another awk:
completedStatus=$(awk -F'[ :<]' '/completed/{print $(NF-1)}' file)
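A small extension of the same idea prints both counts from test.txt (the labels are just for illustration):
awk -F'[ :<]' '/pending/  {print "pending:", $(NF-1)}
               /completed/{print "completed:", $(NF-1)}' test.txt
output
pending: 0
completed: 0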

If I got you right, you just want to extract pending or completed together with the value. If that is the case, then using sed, please check out the script below.
#!/bin/bash
file="$1"
echo "Simple"
sed 's/^.*=\"\([a-z]*\)\">Count: \([0-9]*\)<.*$/\1=\2/g' "$file"
echo "Pipe Separated"
sed 's/^.*=\"\([a-z]*\)\">Count: \([0-9]*\)<.*$/\1|\2/g' "$file"
echo "CSV Style or comma separated"
sed 's/^.*=\"\([a-z]*\)\">Count: \([0-9]*\)<.*$/\1,\2/g' "$file"
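For example, with the two lines from the question in test.txt and the script saved as extract.sh (a hypothetical name), the run would look like:
$ bash extract.sh test.txt
Simple
pending=0
completed=0
Pipe Separated
pending|0
completed|0
CSV Style or comma separated
pending,0
completed,0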

Related

How to pass a variable to the awk command line

I'm having some trouble passing bash script variables into an awk command line.
Here is pseudocode:
for FILE in $INPUT_DIR/*.txt; do
    filename=`echo $FILE | sed -n 's/^.*\(chr[0-9A-Z]*\).*.vcf$/\1/p'`
    OUTPUT_FILE=$OUTPUT_DIR/$filename.snps.txt
    egrep -v "^#" $FILE | awk '{print $2,$4,$5}' > $OUTPUT_FILE
done
In the final line where I awk the columns, I would like the column list to be flexible, i.e. user input. For example, the user could want columns 6, 7, and 8, or columns 133 and 138, or columns 245 through 248. So how do I customize this so the 'print $2 .... $5' part comes from user input? For example, the user would run this script like: bash script.sh input_dir output_dir [user inputs whatever string of columns], and then I would get those columns in the output. I tried passing it in, but I guess I'm not getting the syntax right.
With awk, you should declare the variable before using it. This is better than the quote-escaping method (awk '{print $'$var'}'):
awk -v var1="$col1" -v var2="$col2" 'BEGIN {print var1,var2 }'
Where $col1 and $col2 would be the input variables.
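Applied to the question, the whole column list can be passed in as one shell variable and split inside awk. A sketch, assuming the column list arrives as a third positional argument (the cols name is just illustrative):
# usage: bash script.sh input_dir output_dir "2,4,5"
cols="$3"
egrep -v "^#" "$FILE" | awk -v cols="$cols" '
    BEGIN { n = split(cols, c, ",") }   # c[1]="2", c[2]="4", c[3]="5"
    { out = $(c[1]); for (i = 2; i <= n; i++) out = out OFS $(c[i]); print out }
' > "$OUTPUT_FILE"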
Maybe you can try passing an input variable as a string such as "$2,$4,$5" and printing that variable to get the values (I am not sure whether this works).
The following test works for me:
A="\$3" ; ls -l | awk "{ print $A }"

How to get a word from a text file in BASH

I want to get only one word from this txt file: http://pastebin.com/jFDu0Le5 . The word is in the last row: WER: 45.67% Correct: 65.87% Acc: 54.33%
I want to get only the value 45.67 and save it to the file value.txt. I want to create a BASH script to get this value. Can you give me an example of how to do it? I am new to Bash and I need it for school. The whole .txt file is saved on my server as file.txt.
Try this:
grep WER file.txt | awk '{print $2}' | uniq | sed -e 's/%//' > value.txt
Note that this will overwrite value.txt each time you run the command.
You want grep "WER:" file.txt | cut -???
I have ??? because I do not know the structure of the file. Tab delimited? Fixed width?
Do man cut and you can get the arguments you need.
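Given that the sample line in the question is space-separated, one possible cut (a sketch) takes the second field and strips the trailing percent sign:
grep "WER:" file.txt | cut -d' ' -f2 | tr -d '%' > value.txt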
There are many ways and tools to do the task:
sed
tac file.txt | sed -n '/^WER: /{s///;s/%.*//;p;q}' > value.txt
awk
tac file.txt | awk -F'[ %]' '/^WER:/{print $2;exit}' > value.txt
bash
while read a b c
do
    if [ "$a" = "WER:" ]
    then
        b=${b%\%*}
        echo "${b#* }"
        break
    fi
done < <(tac file.txt) > value.txt
If the format is as you said, then this also works
awk -F'[: %]' '/^WER/{print $3}' file.txt > value.txt
Explanation
-F specifies the field separator as one of [: %]
/<PATTERN>/ {<ACTION>} refers to: if a line matches some PATTERN, then do some ACTION
in my case,
the PATTERN is: starts with ^ the string WER
the ACTION is: print field $3 (as split by the -F field separators)
> sends the output to value.txt
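As a quick check, feeding the sample line from the question straight into that awk prints just the number:
echo "WER: 45.67% Correct: 65.87% Acc: 54.33%" | awk -F'[: %]' '/^WER/{print $3}'
45.67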

BASH: Cannot awk with a variable in a while loop

I have a problem when trying to awk a read input in a while loop.
This is my code:
#!/bin/bash
read -p "Please enter the Array LUN ID (ALU) you wish to query, separated by a comma (e.g. 2036,2037,2045): " ARRAY_LUNS
LUN_NUMBER=`echo $ARRAY_LUNS | awk -F "," '{ for (i=1; i<NF; i++) printf $i"\n" ; print $NF }' | wc -w`
echo "you entered $LUN_NUMBER LUN's"
s=0
while [ $s -lt $LUN_NUMBER ];
do
    s=$[$s+1]
    LUN_ID=`echo $ARRAY_LUNS | awk -F, '{print $'$s'}' | awk -v n1="$s" 'NR==n1'`
    echo "NR $s :"
    echo "awk -v n1="$s" 'NR==n1'$LUN_ID"
done
No matter what awk options I try, I don't get it to display more than the first entry before the comma. It looks to me like the loop has some problem counting the variable s upwards. But on the other hand, the code line:
LUN_ID=`echo $ARRAY_LUNS | awk -F, '{print $'$s'}' | awk -v n1="$s" 'NR==n1'`
works just great! Any idea how to solve this? Another solution for my read input would be just fine as well.
#!/bin/bash
typeset -a ARRAY_LUNS
IFS=, read -r -p "Please enter the Array LUN ID (ALU) you wish to query, separated by a comma (e.g. 2036,2037,2045): " -a ARRAY_LUNS
LUN_NUMBER="${#ARRAY_LUNS[@]}"
echo "you entered $LUN_NUMBER LUNs"
for ((s=0; s<LUN_NUMBER; s++))
do
    echo "LUN id $s: ${ARRAY_LUNS[s]}"
done
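An example session (the LUN IDs typed at the prompt are just the example values from the question; the script name is hypothetical):
$ ./lun_query.sh
Please enter the Array LUN ID (ALU) you wish to query, separated by a comma (e.g. 2036,2037,2045): 2036,2037,2045
you entered 3 LUNs
LUN id 0: 2036
LUN id 1: 2037
LUN id 2: 2045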
Why does your awk code not work?
The problem is not the counter; it is the last awk command in the pipe, i.e.
awk -v n1="$s" 'NR==n1'.
This awk code tries to print the first line when s is 1, the second line when s is 2, the third line when s is 3, and so on... But how many lines are printed by echo $ARRAY_LUNS? Just ONE: there is no second line and no third line; only ONE line is printed.
That one line contains all the LUN_IDs, one next to another, like this:
34 45 21 223
NOT this way
34
45
21
223
Those LUN_IDs are fields printable by awk using $1, $2, $3, ... and so on.
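You can see the difference directly (the LUN IDs below are just the example values from the prompt):
ARRAY_LUNS=2036,2037,2045
echo "$ARRAY_LUNS" | awk -F, '{print $2}'                        # prints 2037: the second FIELD
echo "$ARRAY_LUNS" | awk -F, '{print $2}' | awk -v n1=2 'NR==n1' # prints nothing: there is no second LINE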
Therefore, if you want your code to run fine, just remove that last command from the pipe:
LUN_ID=$(echo "$ARRAY_LUNS" | awk -F, '{print $'$s'}')
Please, for any further questions, first read an awk guide.

Make grep output more readable

I'm using grep to find patterns in files with grep -orI "id=\"[^\"]\+\"" . | sort | uniq -d
Which gives an output like the following:
./myFile.html:id="matchingR"
./myFile.html:id="other"
./myFile.html:id="cas"
./otherFile.html:id="what"
./otherFile.html:id="wheras"
./otherFile.html:id="other"
./otherFile.html:id="whatever"
What would be a convenient way to pipe this an have the following as output:
./myFile.html
id="matchingR"
id="other"
id="cas"
./otherFile.html
id="what"
id="wheras"
id="other"
id="whatever"
Basically group results by filename.
Not the prettiest but it works.
awk -F : -v OFS=: 'f!=$1 {f=$1; print f} f==$1 {$1=""; $0=$0; sub(/^:/, " "); print}'
If none of your lines can ever contain a colon then this simpler version also works.
awk -F : 'f!=$1 {f=$1; print f} f==$1 {$1=""; print}'
These both split fields on colons (-F :), print out the first field (the filename) when it differs from a saved value (and save the new value), and when the first field matches the saved value they remove the first field and print. They differ in how they remove the field and print the output. The first attempts to preserve colons in the matched line. The second (and @fedorqui's version ... f==$1 {$0=$2; print}) assumes no other colons were on the line to begin with.
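Hooked onto the pipeline from the question, the simpler version would look like this (a sketch; the grep pattern and options are exactly the ones above):
grep -orI "id=\"[^\"]\+\"" . | sort | uniq -d \
    | awk -F : 'f!=$1 {f=$1; print f} f==$1 {$1=""; print}'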
Pass output to this script:
#!/bin/sh
sed 's/:/ /' | while read FILE TEXT; do
    if [ "$FILE" = "$GROUP" ]; then
        echo " $TEXT"
    else
        GROUP="$FILE"
        echo "$FILE"
        echo " $TEXT"
    fi
done
Here is a short awk:
awk -F: '{print ($1!=f?$1 RS:""),$2;f=$1}' file
./myFile.html
id="matchingR"
id="other"
id="cas"
./otherFile.html
id="what"
id="wheras"
id="other"
id="whatever"

Adding a single date to the first column of a file

I have a file that looks like
1234-00AA12 .02
5678-11BB34 .03
In a bash script I have an expression like
day=$(...)
that greps a date in the format YYYY/MM/DD (if this matters), let's say 2014/01/21 for specificity.
I want to produce the following:
2014/01/21,1,1,1234,00AA12,.02
2014/01/21,1,1,5678,11BB34,.03
(The first column is the day, the second and third columns are fixed as "1").
After a bit of googling I tried:
cat file|awk -F "-" '{split($2,array," "); printf "%s,%s,%s,%s,%s,%s\n",$day,"1","1",$1,array[1],array[2]}'> output.csv
but $day isn't working with awk.
Any help would be appreciated.
Try this awk:
awk -v d=$(date '+%Y/%m/%d') '{print d,1,1,$1,$2}' OFS=, file
2014/02/07,1,1,1234-00AA12,.02
2014/02/07,1,1,5678-11BB34,.03
$ awk -v day="$day" 'BEGIN{FS="[ -]";OFS=","} {print day,1,1,$1,$2,$3}' file
2014/01/21,1,1,1234,00AA12,.02
2014/01/21,1,1,5678,11BB34,.03
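A quick way to see what the FS="[ -]" part does to one of the sample records:
echo "1234-00AA12 .02" | awk 'BEGIN{FS="[ -]"} {print "$1=" $1, "$2=" $2, "$3=" $3}'
$1=1234 $2=00AA12 $3=.02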
awk doesn't understand shell variables. You need to pass those to it:
awk -vdd="$day" -F "-" '{split($2,array," "); printf "%s,%s,%s,%s,%s,%s\n",dd,"1","1",$1,array[1],array[2]}'
Moreover, rather than saying:
cat file | awk ...
avoid the useless use of cat and pass the file to awk directly:
awk ... file
With bash
day="2014/01/21"
(
    IFS=,
    while IFS=" -" read -ra fields; do
        new=( "$day" 1 1 "${fields[@]}" )
        echo "${new[*]}"
    done < file
)
2014/01/21,1,1,1234,00AA12,.02
2014/01/21,1,1,5678,11BB34,.03
I run the while loop in a subshell just to keep changes to IFS localized.
