Bash - grep & echo only line numbers of occurrence

I am struggling to display only an error message with the line numbers.
e.g.
ERROR: Rule19: Tunerparams and/or CalcInternal in Script at 13, 15, 22
Could you please check and help me get it right? (I am very new to this.)
checkCodingRule19()
{
grep -En "TunerParams|CalcInternal" $INPUT_FILE &&
echo "error: ´Rule 19: Tunerparams and/or Calicinternal in Script at $line"
}

Instead of grep you can use this simple awk script:
awk '(NR==13 || NR==15 || NR==22) && /TunerParams|CalcInternal/' file.log
NR==13 || NR==15 || NR==22 will execute this command only for line numbers 13, 15 & 22
/TunerParams|CalcInternal/ will search for these patterns in a line
It is better to check the line numbers first, to avoid a regex search on every line.

line=$(awk '$0 ~ /TunerParams|CalcInternal/ { printf "%d, ", NR }' < "$INPUT_FILE" | sed 's/, $//')
echo "error: Rule 19: TunerParams and/or CalcInternal in Script at $line"
Technical Explanation
Use awk to search for TunerParams or CalcInternal in $INPUT_FILE. Print NR, the line number, each time a match is made, appending a ", ". Pipe the output to sed to trim the trailing comma. $line now holds the comma-delimited list of numbers, so simply echo it out.
I noticed there is a "´" in your echo statement which probably doesn't belong.
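Putting the pieces together, the whole function might look like this (a sketch; $INPUT_FILE and the message wording are taken from the question):

```shell
checkCodingRule19()
{
    # Collect the matching line numbers as a comma-separated list.
    local lines
    lines=$(awk '/TunerParams|CalcInternal/ { printf "%s%d", (n++ ? ", " : ""), NR }' "$INPUT_FILE")
    if [ -n "$lines" ]; then
        echo "ERROR: Rule 19: TunerParams and/or CalcInternal in Script at $lines"
        return 1
    fi
}
```

The awk printf prints a ", " separator only before the second and later matches, so no trailing-comma cleanup is needed.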


How can I make an if statement inside a while loop perform certain actions only on certain lines?

I'm trying to write a simple bash script (launched with sh) that creates a new output file from an input file, keeping each line starting with ">" in its position, while every line that does not satisfy this requirement must have every third character deleted before being appended to the new file.
input file:
>0197_16S
-AAAAACATGTCCTCTTGTTTATA-----TNTGAGGTTTGACCTGCCCTATG--A---
>0688_16S
-----ACATCTTCTCTTGAGTTAT-----TTTGAGATATGACCTGCCCAATG--A-T-
.
.
.
.
sh script:
while IFS= read line; do
if [ "$line" = ">"* ]; then echo "$line" >> output.txt
else
var=$(echo "$line" | awk -vFS= '{for (i = 1; i <=NF; i+3) {printf $i(i+1)} printf "\n"}');
echo "$var" >> output.txt
fi;
done <foo.txt
The else branch seems to work; however, the if condition is never satisfied, so every third character is also eliminated from the lines that begin with the character ">".
actual output:
>09716
-AAACAGTCTTTTTAT----NTAGTTGACTCCTAG-A--
>08816
----CACTCTTTAGTA----TTAGTAGACTCCAAG-A--
.
.
.
expected output:
>0197_16S
-AAACAGTCTTTTTAT----NTAGTTGACTCCTAG-A--
>0688_16S
----CACTCTTTAGTA----TTAGTAGACTCCAAG-A--
.
.
.
Try to avoid a while loop.
Without the condition of keeping each line starting with ">" untouched, you can do:
sed -r 's/(..)./\1/g' foo.txt
Adding a condition for the lines with > can be done by applying the substitution only to lines that don't match:
sed -r '/^>/ !s/(..)./\1/g' foo.txt
Or with GNU awk (gensub() is a gawk extension):
awk '/^>/ {print;next} {print gensub(/(..)./,"\\1", "g")}' foo.txt
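If you do want to keep the loop, the actual bug is in the test: inside single brackets, [ "$line" = ">"* ] is a literal string comparison (the * is not treated as a pattern there), so the if branch never fires. Bash's [[ ... == pattern ]] does glob matching, so run the script with bash rather than sh. A minimal sketch, with a two-line demo input created inline:

```shell
#!/bin/bash
# Demo input: one header line, one sequence line.
printf '>0197_16S\nabcdef\n' > foo.txt

while IFS= read -r line; do
    if [[ "$line" == ">"* ]]; then      # [[ ]] does real pattern matching
        echo "$line"
    else
        # delete every third character (positions 3, 6, 9, ...)
        echo "$line" | sed 's/\(..\)./\1/g'
    fi
done < foo.txt > output.txt

cat output.txt
```

With this input, output.txt keeps ">0197_16S" intact and turns "abcdef" into "abde".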

awk command returns inconsistent result

I have a script which performs several validations in a file.
One of the validation is to check if a line exceeds 95 characters (including spaces). If it does, it will return the line number/s where the line exceeds 95 characters. Below is the command used for the validation:
$(cat $FileName > $tempfile)
totalCharacters=$(awk '{print length($0);}' $tempFile | grep -vn [9][4])
if [[$totalCharacters != "" ]]; then
totalCharacters=$(awk '{print length($0);}' $tempFile | grep -vn [9][4] | cut -d : -f1 | tr "\n" ",")
echo "line too long"
exit 1
fi
In the lower environment the code works as expected. But in production, there are times when the validation returns the error "line too long" but does not return any line numbers. We just reran the script and it did not return any error.
I would like to know what could be wrong in the command used.
The previous developer who worked on this said that it could be an issue with the use of the awk command, but I am not sure since this is the first time I have encountered using the awk command.
This is not well written. All you need is:
awk 'length>94{print NR; f=1} END{exit f}' file
If there are lines longer than 94 chars, it will print their line numbers and then exit with status 1; otherwise the exit status will be 0 and no output will be generated.
Your script should just be:
awk '
length()>95 { printf "%s%d", (c++?",":""), NR }
END { if (c) {print "line too long"; exit 1} }
' "$FileName"
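A quick demonstration of the first one-liner (demo.txt is a made-up file name):

```shell
# Line 2 is 100 characters long, so its number is printed
# and the exit status is 1.
printf '%s\n' short "$(printf 'x%.0s' $(seq 100))" > demo.txt
awk 'length>94{print NR; f=1} END{exit f}' demo.txt
echo "exit status: $?"
```

This prints `2` followed by `exit status: 1`.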

How can I remove a line that contains a variable in bash?

I know some people may say the solution is sed, but it didn't work for me.
So, the thing is that I read a variable with read var; then I want to check whether that variable exists in a column (specified by me) of my file, and if it doesn't, just keep asking "Please enter a valid code", and if it's correct, just delete that line. Thanks.
CODE
read var
sed -i '/$var/d' file.txt
And I want to add some sort of test that confirms whether you entered a valid code or not.
The structure of the file is
code;Name;Surname
There are no spaces or odd bits to parse here, so sed needs no single quotes:
read var
sed -i /"$var"/d file.txt
And a demo -- make a list from 1 to 3, remove 2:
seq 3 > three.txt; var=2; sed -i /"$var"/d three.txt ; cat three.txt
Outputs:
1
3
The following uses awk to search for and remove lines whose first column is $code. If a line is removed, awk exits successfully and break is called.
file="input_file"
while :; do
echo "Enter valid code:"
read -r code
[ -n "$code" ] || continue
awk -F';' -v c="$code" '$1 == c {f=1;next}1;END{exit(f?0:1)}' \
"$file" > "$file.out" && break
done
mv "$file.out" "$file"
This will keep asking for a code until the user enters a valid one, at which point $file.out is created and the loop is broken.
Then $file.out is renamed to $file. (Technically, $file.out is created on each iteration.)
You can have $var or ${var} expanded with
read var
sed -i '/'${var}'/d' file.txt
But what will happen when $var has a space? Nothing good, so use double quotes as well:
read var
sed -i '/'"${var}"'/d' file.txt
I was trying with awk on the file below:
$cat test.txt
code;Name;Surname
xyz;n1;s1
abc;dd;ff
xyz;w;t
abc;ft;op
It will print the lines that it is going to delete, but I am not able to figure out how to delete the line from awk after printing the info.
$ var=xyz; awk -v var="$var" -F ";" '{ if ($1 == var ) print "FOUND " var " And Going to delete the line" NR " " $0 ; }' test.txt
FOUND xyz And Going to delete the line2 xyz;n1;s1
FOUND xyz And Going to delete the line4 xyz;w;t
The command below will display the info and delete the data, but it takes two passes over the input file.
$ var=xyz; awk -v var="$var" -F ";" '{ if ($1 == var ) print "FOUND " var " And Going to delete the line" NR " " $0 ; }' test.txt && awk -v var="$var" -F ";" '($1 != var)' test.txt > _tmp_ && mv _tmp_ test.txt
FOUND xyz And Going to delete the line2 xyz;n1;s1
FOUND xyz And Going to delete the line4 xyz;w;t
New File after deletion :
$cat test.txt
code;Name;Surname
abc;dd;ff
abc;ft;op
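The two passes can also be collapsed into one by sending the diagnostic to stderr while the kept lines go to a temp file (a sketch, reusing the sample file from above):

```shell
# Recreate the sample file shown above.
printf 'code;Name;Surname\nxyz;n1;s1\nabc;dd;ff\nxyz;w;t\nabc;ft;op\n' > test.txt

var=xyz
# Matching lines are reported on stderr and skipped; everything else is kept.
awk -v var="$var" -F ';' '
    $1 == var { print "FOUND " var " And Going to delete the line " NR ": " $0 > "/dev/stderr"; next }
    { print }
' test.txt > _tmp_ && mv _tmp_ test.txt

cat test.txt
```

The file ends up with only the header and the two abc lines, and the input was read once.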

How to pad out values line by line while maintaining overall record length in a Unix shell script (ksh)

IFS=$'\n'
while read -r line
do
# header/trailer record
if echo ${line} | grep -e '000000000000000' -e '999999999999999' >/dev/null 2>&1
then
echo ${line} >> outfile.01.DAT.sampleNEW
elif echo ${line} | grep '+0' >/dev/null 2>&1
then
echo ${line} | sed -e 's/+/+00000000/; s/ X/X/' >> outfile.01.DAT.sampleNEW
else
echo ${line} | sed -e 's/-/-00000000/; s/ X/X/' >> outfile.01.DAT.sampleNEW
fi
done < Inputfile.01.DAT
I have a large file in which I need to pad out the amount fields (signed) but retain the overall record length, so I have to remove some filler spaces at the end (each line ends with an X). The file has a header/trailer that does not need to change. I have come up with a way, but it is very slow with a large input file. I am sure the use of grep here is not good.
Sample records (each ends with X; overall length 107 bytes):
000000000000000PPPPPPPPP Information INV TRANSACTION 0120160505201605052154HI203.SEQ 01 X
000000000000001PPPPP14PA 000YYYYYY488 -0001235.2520150319 X
000000000000002PPPMS PA 000RRRRR4539 +0008285.0020160301 X
000000000000003PPPP506 000TTTTTT605 -0000225.0020150608 X
9999999999999990000000000000439.940000000079802782.180000005 X
I suspect you want something like this, but it is very hard to tell given the way you have presented your question:
awk '
/000000000000000/ || /999999999999999/ {print;next}
/\+0/ {sub(/\+/,"+00000000"); sub(/        X/,"X"); print; next}
/-0/  {sub(/-/,"-00000000"); sub(/        X/,"X"); print; next}
' Inputfile.01.DAT
That says: if the line contains a string of 15 zeroes or 15 nines, print it and move to the next line. If the line contains +0, pad the sign out to +00000000 and remove 8 spaces before the final X so the record length is unchanged, then print. Likewise for -0.
You could also maybe use Perl, and do something like this:
perl -nle '/0{15}|9{15}/ && print; s/([+-])0/${1}000000000/ && s/        X/X/ && print' Inputfile.01.DAT
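Since the whole point is a fixed record length, it is worth verifying the result afterwards. For the 107-byte records in the sample, something like this works (outfile.sample is a made-up name; here a demo file is built inline):

```shell
# Demo file: one correct 107-character record and one short one.
printf '%107s\n%s\n' X short > outfile.sample

# Report any line whose length (excluding the newline) is not 107.
awk 'length != 107 { print "line " NR " has length " length }' outfile.sample
```

Silence means every record has the expected length; here it reports line 2.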

Find special character in last line of text file

I have a text file like this (e.g., a.txt):
1.1.t
1.2.m
If the last line ends with the character m, I want to echo Ok.
I tried this:
line=` awk '/./{line=$0} END{print line}' a.txt`
line1= `echo $line | grep "m"`
if [[ $line1= `:` ]] ; then
echo
else
echo "Ok"
fi
It does not work, and the error is:
bash: conditional binary operator expected
bash: syntax error near ``:`,
`if [[ $line1= `:` ]] ; then'
if [[ $line1= `:` ]] is incorrect syntax in a couple of ways: the spaces around = are missing, and backticks are used, which perform command substitution.
awk itself can handle this:
awk '/./{line=$0} END{print (line ~ /\.m/)? "ok" : "no"}' file
ok
You could also use tail and grep:
[[ -n $(tail -1 a.txt | grep "m$") ]] && echo "OK" || echo "FAILED"
You can use sed:
sed -n '${/m$/s/.*/OK/p;}' file
The option -n suppresses output by default. $ addresses the last line of input. In that case we check if the line ends with m through /m$/. If that is the case we substitute the line with the word OK and print it.
Btw, I was going through your shell script; there are really too many errors to explain them all. The syntax error is because there is no space between $line1 and the = in the [[ ... ]] conditional. But hey, this is far from being the only problem with that script. ;)
http://www.shellcheck.net/ might be a good resource to enhance your scripts.
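For completeness, a quick check of the tail/grep and sed approaches against the sample file:

```shell
# Recreate the sample file from the question.
printf '1.1.t\n1.2.m\n' > a.txt

# Both print "OK" because the last line ends with m.
tail -1 a.txt | grep -q 'm$' && echo "OK"
sed -n '${/m$/s/.*/OK/p;}' a.txt
```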
