How to get the 'variable' line from file? [duplicate] - bash

This question already has answers here:
Bash tool to get nth line from a file
(22 answers)
Closed 7 years ago.
This is my script. It prints every row in the file with its row number.
Next I want to read which row the user chose and save it to some variable.
I=1
for ROW in $(cat file.txt)
do
echo "$I $ROW"
I=`expr $I + 1`
done
read var
awk 'FNR = $var {print}' file.txt
Then I want to print / save the chosen row to a file.
How can I do this?
When I echo $var it shows me the number properly. But when I try to use this variable in awk, it prints every line.
How do I read the 'var'-th line from the file?
And moreover, how do I save this line in another variable?
Example file.txt
1 line1
2 line2
3 line3
4 line4
When I type 3 I want to read the third line from the file.

Try this:
cat -n file.txt; read var; line="$(sed -n "${var}p" file.txt)"; echo "$line"
Building on Dryingsoussage's version:
#!/bin/bash
file="file.txt"
declare -i counter=0 # set integer attribute
var=0
while read -r line; do
counter=counter+1
printf "%d %s\n" "$counter" "$line"
done < "$file"
# check for number and greater-than 0 and less-than-or-equal $counter
until [[ $var =~ ^[0-9]+$ ]] && [[ $var -gt 0 ]] && [[ $var -le $counter ]]; do
read -p "Enter line number:" var
done
awk -v var="$var" 'FNR==var {print}' "$file"

You cannot use $varname inside single quotes; it will not be expanded.
Look at this other post, it should help you:
How to use shell variables in an awk script
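A quick illustration of the difference (a sketch, using the file.txt from the question):
var=3
echo '$var'                               # single quotes: prints the literal text $var
echo "$var"                               # double quotes: prints 3
awk -v var="$var" 'FNR == var' file.txt   # -v passes the shell value into awk safely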

cat -n file.txt
read var
row="$(awk -v tgt="$var" 'NR==tgt{print;exit}' file.txt)"

First: You cannot use $var inside single quotes; echo '$var' prints the literal $var, not its value.
Second: You used the = (assignment) operator instead of the == (equality) operator.
Third: You don't have to write { print } if you want the line printed; it is awk's default action, so you can write nothing instead.
Fourth: As was explained in the deleted comment below - do not let bash expand variables inside the awk script code, as it can lead to code injection.
So conclusion is:
awk -v var="$var" 'FNR == var' file.txt
should do what you want.
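Putting it all together, a minimal sketch that also saves the chosen row in a second variable (the script and variable names are mine):
#!/bin/bash
file="file.txt"
cat -n "$file"                        # print every row with its number
read -r -p "Row number: " var
row="$(awk -v var="$var" 'FNR == var' "$file")"
echo "You chose: $row"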

Related

Detect double new lines with bash script

I am attempting to return the line number of lines that have a break. An input example:
2938
383

3938
3

383
33333
But my script is not working and I can't see why. My script:
input="./input.txt"
declare -i count=0
while IFS= read -r line;
do
((count++))
if [ "$line" == $'\n\n' ]; then
echo "$count"
fi
done < "$input"
So I would expect 3 and 6 as output.
I just receive a blank response in the terminal when I execute. So there isn't a syntax error; something else is wrong with the approach I am taking. A bit stumped and grateful for any pointers.
Also "just use awk" doesn't help me. I need this structure for additional conditions (this is just a preliminary test) and I don't know awk syntax.
The issue is that "$line" == $'\n\n' can never match: read consumes the line's trailing newline, so an empty line in the input arrives as an empty string. Instead you can match an empty line with the regex pattern ^$:
if [[ "$line" =~ ^$ ]]; then
Now it should work.
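For completeness, here is the original loop with that one change applied (a sketch):
input="./input.txt"
declare -i count=0
while IFS= read -r line; do
    ((count++))
    # read strips the trailing newline, so a "double newline" in the raw
    # file arrives here as an empty $line
    if [[ "$line" =~ ^$ ]]; then
        echo "$count"
    fi
done < "$input"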
It's also much easier with the awk command:
$ awk '$0 == ""{ print NR }' test.txt
3
6
As Roman suggested, line read by read terminates with a delimiter, and that delimiter would not show up in the line the way you're testing for.
If the pattern you are searching for looks like an empty line (which I infer is how a "double newline" always manifests), then you can just test for that:
while read -r; do
((count++))
if [[ -z "$REPLY" ]]; then
echo "$count"
fi
done < "$input"
Note that IFS is for field-splitting data on lines, and since we're only interested in empty lines, IFS is moot.
Or if the file is small enough to fit in memory and you want something faster:
mapfile -t -O1 foo < "$input"
declare -p foo
for n in "${!foo[@]}"; do
if [[ -z "${foo[$n]}" ]]; then
echo "$n"
fi
done
Reading the file all at once (mapfile) then stepping through an array may be easier on resources than stepping through a file line by line.
You can also just use GNU awk:
gawk -v RS= -F '\n' '{ i += NF; n = length(RT) - 1; for (j = 1; j <= n; j++) print ++i }' input.txt
By using FS = ".+", it ensures only truly zero-length (i.e. $0 == "") line numbers get printed, while skipping rows consisting entirely of [[:space:]]'s
echo '2938
383

3938
3

383
33333' |
{m,g,n}awk -F'.+' '!NF && $!NF = NR'
3
6
This sed one-liner should do the job at once:
sed -n '/^$/=' input.txt
Simply writes the current line number (the = command) if the line read is empty (the /^$/ matches the empty line).
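If you need the numbers back in the shell, for example as an array, the same sed expression can feed mapfile (a sketch):
mapfile -t blanks < <(sed -n '/^$/=' input.txt)
(( ${#blanks[@]} )) && printf 'blank line at %s\n' "${blanks[@]}"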

How to write to a file in linux upon a condition [duplicate]

This question already has answers here:
Appending a line to a file only if it does not already exist
(25 answers)
Closed 2 years ago.
This might be very simple, but I need to do this:
I have a file named a.sh, and it contains a few lines as below
# firstname banuka
firstcolor green
lastname jananath
# age 25
And I want to write firstname banuka to this file, so it would look like
echo "firstname banuka" > a.sh
BUT, before writing firstname banuka I want to check if the file already has that value (or line).
As you can see in the file content of a.sh, the part we are going to write (firstname banuka) may already be there, but commented out.
So if it has a comment,
1. I want to un-comment it (remove `#` in front of `firstname banuka`)
If no comment and no line which says `firstname banuka`,
2. Add the line `firstname banuka`
If no comment and line is already there,
3. skip (don't write `firstname banuka` part to file)
Can someone please help me?
string="firstname banuka"
file=./a.sh
grep -qwi "^[^[:alnum:]]*$string$" "$file" && \
sed -i "s,\(^[^[:alnum:]]*\)\($string$\),\2,i" "$file" || \
printf "\n%b\n" "$string" >> "$file"
You need to use a programming language that can scan for patterns, has variables, and has conditional constructs. awk is one such language:
awk -v text="firstname banuka" '
$0 ~ text {
found = 1 # remember that we have seen it
sub(/^ *# */, "") # remove the comment, if there is one
}
{print}
END {if (!found) {print text}}
' file
This will not edit the file, just print it out. With GNU awk, if you want to edit the file in-place:
gawk -i inplace -v text="..." '...' file
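If your awk lacks the GNU -i inplace extension, the portable temp-file approach achieves the same edit (a sketch):
awk -v text="firstname banuka" '
$0 ~ text { found = 1; sub(/^ *# */, "") }
{ print }
END { if (!found) print text }
' a.sh > a.sh.tmp && mv a.sh.tmp a.sh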
With plain bash you would write
found=false
while IFS= read -r line; do
if [[ "$line" == *"firstname banuka"* ]]; then
found=true
if [[ "$line" =~ ^[[:blank:]]*[#][[:blank:]]*(.+) ]]; then
line="${BASH_REMATCH[1]}"
fi
fi
echo "$line"
done < file
$found || echo "firstname banuka"
Alternative Bash implementation
#!/usr/bin/env bash
found=false
while IFS= read -r line; do
if [[ "$line" =~ ^([[:blank:]]*[#][[:blank:]]*)?(firstname banuka) ]]; then
echo "${BASH_REMATCH[2]}"
found=true
else
echo "$line"
fi
done < a.sh
$found || echo "firstname banuka"
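Run against the question's sample file, either loop should print (a sketch; uncomment.sh is a hypothetical name for the script above):
$ bash uncomment.sh
firstname banuka
firstcolor green
lastname jananath
# age 25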

How to browse a line from a file?

I have a file that contains 10 lines with this sort of content:
aaaa,bbb,132,a.g.n.
I want to walk through every line, char by char, and put the data that appears before each ',' into an output file.
if [ $# -eq 2 ] && [ -f $1 ]
then
echo "Read nr of fields to be saved or nr of commas."
read n
nrLines=$(wc -l < $1)
while $nrLines!="1" read -r line || [[ -n "$line" ]]; do
do
for (( i=1; i<=$n; ++i ))
do
while [ read -r -n1 temp ]
do
if [ temp != "," ]
then
echo $temp > $(result$i)
else
fi
done
paste -d"\n" $2 $(result$i)
done
nrLines=$($nrLines-1)
done
else
echo "File not found!"
fi
}
In parameter $2 I have an empty file in which I will store the data from file $1 after I extract it without the " , " and add a couple of comments.
Example:
My input_file contains:
a.b.c.d,aabb,comp,dddd
My output_file is empty.
I call my script: ./script.sh input_file output_file
After execution the output_file contains:
First line info: a.b.c.d
Second line info: aabb
Third line info: comp
(yes, without the 4th line info)
You can do what you want very simply with parameter-expansion and substring-removal using bash alone. For example, take an example file:
$ cat dat/10lines.txt
aaaa,bbb,132,a.g.n.
aaaa,bbb,133,a.g.n.
aaaa,bbb,134,a.g.n.
aaaa,bbb,135,a.g.n.
aaaa,bbb,136,a.g.n.
aaaa,bbb,137,a.g.n.
aaaa,bbb,138,a.g.n.
aaaa,bbb,139,a.g.n.
aaaa,bbb,140,a.g.n.
aaaa,bbb,141,a.g.n.
A simple one-liner using native bash string handling could simply be the following and give the following results:
$ while read -r line; do echo ${line%,*}; done <dat/10lines.txt
aaaa,bbb,132
aaaa,bbb,133
aaaa,bbb,134
aaaa,bbb,135
aaaa,bbb,136
aaaa,bbb,137
aaaa,bbb,138
aaaa,bbb,139
aaaa,bbb,140
aaaa,bbb,141
Parameter expansion w/substring removal works as follows:
var=aaaa,bbb,132,a.g.n.
Beginning at the left and removing up to, and including, the first ',' is:
${var#*,} # bbb,132,a.g.n.
Beginning at the left and removing up to, and including, the last ',' is:
${var##*,} # a.g.n.
Beginning at the right and removing up to, and including, the first ',' is:
${var%,*} # aaaa,bbb,132
Beginning at the right and removing up to, and including, the last ',' is:
${var%%,*} # aaaa
Note: the text to remove above is represented with a wildcard '*', but wildcard use is not required. It can be any allowable text. For example, to only remove ,a.g.n where the preceding number is 136, you can do the following:
${var%,136*},136 # aaaa,bbb,136 (all others unchanged)
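Applied to the question's desired output, a short sketch that peels off one field at a time with the same substring removal (the labels array is mine):
line="a.b.c.d,aabb,comp,dddd"
labels=(First Second Third)
for label in "${labels[@]}"; do
    echo "$label line info: ${line%%,*}"   # everything before the first ','
    line="${line#*,}"                      # drop that field and its comma
done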
To print the 2016th line from a file named file.txt you have to run a command like this:
sed -n '2016p' < file.txt
More examples:
sed -n '2p' < file.txt
will print 2nd line
sed -n '2011p' < file.txt
2011th line
sed -n '10,33p' < file.txt
line 10 up to line 33
sed -n '1p;3p' < file.txt
1st and 3rd lines
and so on...
For more detail, please have a look at this tutorial and this answer.
In native bash the following should do what you want, assuming you replace the contents of your script.sh with the below:
#!/bin/bash
IN_FILE=${1}
OUT_FILE=${2}
IFS=\,
while read line; do
set -- ${line}
for ((i=1; i<=${#}; i++)); do
((${i}==4)) && continue
((n+=1))
printf '%s\n' "Line ${n} info: ${!i}"
done
done < ${IN_FILE} > ${OUT_FILE}
This prints every field of each line except the 4th, one per line, in the output file (I assume this is your requirement as per your comment?).
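With the question's sample input this gives (a sketch):
$ ./script.sh input_file output_file
$ cat output_file
Line 1 info: a.b.c.d
Line 2 info: aabb
Line 3 info: comp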
[wspace#wspace sandbox]$ awk -F"," 'BEGIN{OFS="\n"}{for(i=1; i<=NF-1; i++){print "line Info: "$i}}' data.txt
line Info: a.b.c.d
line Info: aabb
line Info: comp
This little snippet ignores the last field.
updated:
#!/usr/bin/env bash
if [ ! -f "$1" -o $# -ne 2 ];then
echo "Usage: $(basename $0) input_file out_file"
exit 127
fi
input_file=$1
output_file=$2
: > $output_file
if [ "$(wc -l < $1)" -ne 0 ];then
while IFS= read -r -n1 char
do
if [ "$char" == "" ];then
temp=""   # end of line reached: drop the last field
elif [ "$char" != "," ];then
temp=$temp$char
else
echo "line info: $temp" >> $output_file
temp=""
fi
done < $input_file
else
echo "file $1 is empty"
fi
Maybe this is what you want
Did you try
sed "s|,|\n|g" $1 | head -n -1 > $2
I assume that only the last word would not have a comma on its right.
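A quick check with the question's sample line (assuming GNU sed and GNU head, since \n in the replacement and a negative head count are GNU extensions):
$ echo 'a.b.c.d,aabb,comp,dddd' | sed 's|,|\n|g' | head -n -1
a.b.c.d
aabb
comp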
Try this (tested with your sample line):
#!/bin/bash
# script.sh
echo "Number of fields to save ?"
read nf
while IFS=$',' read -r -a arr; do
newarr=("${arr[@]:0:${nf}}")
done < "$1"
for i in "${newarr[@]}"; do
printf "%s\n" "$i"
done > "$2"
Execute script with :
$ ./script.sh inputfile outputfile
Number of fields ?
3
$ cat outputfile
a.b.c.d
aabb
comp
All words separated by commas are stored in the array $arr.
A tmp array $newarr keeps only the first $nf elements ($nf comes from the read command).
It loops over the new array and prints the result to $2, the outputfile.

bash Shell: lost first element data partially

Using bash shell:
I am trying to read a file line by line, and every line contains two meaningful file names delimited by "``".
File 1: image_config.txt
bbbbb.mp4``thumb/hashdata.gif
bbbbb.mp4``thumb/hashdata2.gif
Shell Script
#!/bin/bash
filename="image_config.txt"
while IFS='' read -r line || [[ -n "$line" ]]; do
IFS='``' read -r -a array <<< "$line"
if [ "$line" = "" ]; then
echo lineempty
else
file=${array[0]}
hash=${array[2]}
echo $file$hash;
output=$(ffmpeg -v warning -ss 2 -t 0.8 -i $file -vf scale=200:-1 -gifflags +transdiff -y $hash);
echo $output;
# echo ${array[0]}${array[1]}${array[2]}
fi;
done < "$filename"
The first time it executes successfully, but when the loop executes a second time the variable file loses bbbbb from bbbbb.mp4 and the following output comes out.
Output:
user#domain [~/public_html/Videos]$ sh imager.sh
bbbbb.mp4thumb/hashdata.gif
.mp4thumb/hashdata2.gif
.mp4: No such file or directory
lineempty
Please check out Bash FAQ 89 - I'm using a loop which runs once per line of input but it only seems to run once; everything after the first line is ignored? which seems to be helpful in your case.
Aside:
There is no point in using the same character twice in IFS.
IFS=\`
Is enough.
Check out this:
var='abc``def'
IFS=\`\` read -ra arr <<< "$var"
printf '<%s>\n' "${arr[@]}"
Output:
<abc>
<>
<def>
As you can see, arr[0] is abc, arr[1] is empty and arr[2] is def, and not arr[0] is abc and arr[1] is def as one might expect.
Taken from the IFS wiki of Greycat and Lhunath Bash Guide :
The IFS variable is used in shells (Bourne, POSIX, ksh, bash) as the input field separator (or internal field separator). Essentially, it is a string of special characters which are to be treated as delimiters between words/fields when splitting a line of input.
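Since IFS always splits on single characters, one way to honor the two-character `` delimiter is to first collapse it to a single character that cannot occur in the data (a sketch; the ASCII unit separator is my choice and is assumed absent from the filenames):
line='bbbbb.mp4``thumb/hashdata.gif'
delim='``'
sep=$'\x1f'                      # ASCII unit separator, assumed not in the data
IFS="$sep" read -r file hash <<< "${line//"$delim"/$sep}"
printf 'file=%s hash=%s\n' "$file" "$hash"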
Here is how you could do it differently, avoiding the second read inside the loop:
#!/bin/bash
filename="image_config.txt"
while IFS='' read -r line || [[ -n "$line" ]]; do
if [ "$line" = "" ]; then
echo lineempty
else
file=$( echo ${line} | awk -F \` ' { print $1 } ' )
hash=$( echo ${line} | awk -F \` ' { print $3 } ' )
echo $file$hash;
output=$(ffmpeg -v warning -ss 2 -t 0.8 -i $file -vf scale=200:-1 -gifflags +transdiff -y $hash);
echo $output;
fi;
done < "$filename"

shell bash - How to split a string according to a delimiter and echo each substring to a file [duplicate]

This question already has answers here:
How do I split a string on a delimiter in Bash?
(37 answers)
Closed 9 years ago.
Hi, I am trying to split a string I am getting from a file, using the delimiter "<". I then want to echo each substring to a file. I am sort of there, but am not sure how best to split the string and then loop over and echo each substring (there may be up to 10 substrings). I am guessing I need to create an array to store these strings and then have a loop to echo each value?
Here is what I have so far:
while read line
do
# check if the line begins with client_values=
if[["$line" == *client_values=*]]
CLIENT_VALUES = 'echo "${line}" | cut -d'=' -f 2'
#Now need to split the CLIENT_VALUES using "<" as a delimeter.
# for each substring
echo "Output" >> ${FILE}
echo "Value $substring" >> ${FILE}
echo "End" >> ${FILE}
done < ${PROP_FILE}
grep '^client_values=' < "${PROP_FILE}" | while IFS='=' read name value
do
IFS='<' read -ra parts <<< "$value"
for part in "${parts[#]}"
do
echo "Output"
echo "Value $part"
echo "End"
done >> "${FILE}"
done
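Note that this snippet assumes ${PROP_FILE} and ${FILE} are already set, e.g.:
PROP_FILE=input.txt   # file containing the client_values= lines
FILE=output.txt       # destination for the Output/Value/End blocks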
A one-line awk might be simpler here (and you get the added bonus of having the angry face regex separator =<):
$ awk -F "[=<]" '/^client_values/ { print "Output"; for (i = 2; i <= NF; i++) print "Value " $i; print "End"}' input.txt >> output.txt
$ cat input.txt
client_values=firstvalue1<secondvalue2<thirdvalue3
some text
client_values=someothervalue1<someothervalue2
$ cat output.txt
Output
Value firstvalue1
Value secondvalue2
Value thirdvalue3
End
Output
Value someothervalue1
Value someothervalue2
End
Your approach could probably also work; I think with minimal modification you would want something like:
#!/bin/bash
PROP_FILE='input.txt'
FILE="output2.txt"
while read line
do
# check if the line begins with client_values=
if [[ "$line" == "client_values="* ]]
then
CLIENT_VALUES=`echo "${line}" | cut -d'=' -f 2`
IFS='<' read -ra CLIENT_VALUES <<< "$CLIENT_VALUES"
for substring in "${CLIENT_VALUES[@]}"; do
echo "Output" >> "${FILE}"
echo "Value $substring" >> "${FILE}"
echo "End" >> "${FILE}"
done
fi
done < "${PROP_FILE}"
Which produces
$ cat output2.txt
Output
Value firstvalue1
End
Output
Value secondvalue2
End
Output
Value thirdvalue3
End
Output
Value someothervalue1
End
Output
Value someothervalue2
End
Though again, not sure if that's what you want or not.
