For Loop in Shell Script - add line break in CSV file - bash

I'm trying to use a for loop in a shell script.
I am executing commands from a text file. I want to run each command 10 times and append some stats to a CSV file. Once a command has run 10 times, I want to start the next one, BUT add a line break in the CSV file after the block for the command that was just run 10 times.
Is the following correct?
#Function processLine
processLine(){
    line="$@"
    for i in 1 2 3 4 5 6 7 8 9 10
    do
        START=$(date +%s.%N)
        echo "$line"
        eval $line > /dev/null 2>&1
        END=$(date +%s.%N)
        DIFF=$(echo "$END - $START" | bc)
        echo "$line, $START, $END, $DIFF" >> file.csv 2>&1
        echo "It took $DIFF seconds"
        echo $line
    done
}
Thanks all for any help
UPDATE
It is doing the loop correctly, but I can't get it to add a line break after each command is executed 10 times.

processLine()
{
    line="$@"
    echo $line >> test_file
    for ((i = 1; i <= 10 ; i++))
    do
        # do not move to the next line
        echo -n "$i," >> test_file
    done
    # move to the next line: add the break
    echo >> test_file
}
echo -n > test_file
processLine 'ls'
processLine 'find . -name "*"'

How about just adding a line "echo >> file.csv" after done? Or do you only want an empty line between each block of 10? Then you could do the following:
FIRST=1
processLine()
{
    if (( FIRST )); then
        FIRST=0
    else
        echo >> file.csv
    fi
    ...rest of code...
}
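For the first suggestion (a blank line after every block of 10), a minimal sketch based on the question's processLine, with the extra echo placed after the loop:
processLine(){
    line="$@"
    for i in 1 2 3 4 5 6 7 8 9 10
    do
        START=$(date +%s.%N)
        eval $line > /dev/null 2>&1
        END=$(date +%s.%N)
        DIFF=$(echo "$END - $START" | bc)
        echo "$line, $START, $END, $DIFF" >> file.csv
    done
    echo >> file.csv   # blank line separating this command's block from the next
}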
Otherwise you might want to give an example of the desired output and the output you are getting now.

It looks reasonable. Does it do what you want it to?
You could simplify some things, e.g.
DIFF=$(echo "$END - $START" | bc)
could be just
DIFF=$((END - START))
if END and START are integers, and there's no need to put things in variables if you're only going to use them once.
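For example, a sketch of the timing part using whole seconds (date +%s instead of +%s.%N), assuming sub-second precision isn't needed, so plain shell arithmetic works without bc:
START=$(date +%s)              # whole seconds, an integer
eval $line > /dev/null 2>&1
END=$(date +%s)
DIFF=$((END - START))          # integer arithmetic, no bc needed
echo "$line, $START, $END, $DIFF" >> file.csv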
If it's not doing what you want, edit the question to describe the problem (what you see it doing and what you'd rather have it do).

Related

issue with if statement in bash

I have an issue with an if statement. In WEDI_RC a log file is saved in the following format:
name_of_file date number_of_starts
I want to compare the first argument $1 with the first column and, if it matches, increment the number of starts. When I run my script it works, but only per file, e.g.:
file1.c 11:23:07 1
file1.c 11:23:14 2
file1.c 11:23:17 3
file1.c 11:23:22 4
file2.c 11:23:28 1
file2.c 11:23:35 2
file2.c 11:24:10 3
file2.c 11:24:40 4
file2.c 11:24:53 5
file1.c 11:25:13 1
file1.c 11:25:49 2
file2.c 11:26:01 1
file2.c 11:28:12 2
Every time the file name changes, the counting starts again from 1. I need the count for each file to continue from where it left off.
Hope you understand me.
while read -r line
do
echo "line:"
echo $line
if [ "$1"="$($line | grep ^$1)" ]; then
number=$(echo $line | grep $1 | awk -F'[ ]' '{print $3}')
else
echo "error"
fi
done < $WEDI_RC
echo "file"
((number++))
echo $1 `date +"%T"` $number >> $WEDI_RC
There are at least two ways to resolve the problem. The most succinct is probably:
echo "$1 $(date +"%T") $(($(grep -c "^$1 " "$WEDI_RC") + 1))" >> "$WEDI_RC"
However, if you want to have counts for each file separately, you can do that using an associative array, assuming you have Bash version 4.x (not 3.x as is provided on Mac OS X, for example). This code assumes the file is correctly formatted (so that the counts do not reset to 1 each time the file name changes).
declare -A files # Associative array
while read -r file time count # Split line into three variables
do
echo "line: $file $time $count" # One echo - not two
files[$file]="$count" # Record the current maximum for file
done < "$WEDI_RC"
echo "$1 $(date +"%T") $(( ${files[$1]} + 1 ))" >> "$WEDI_RC"
The code uses read to split the line into three separate variables. It echoes what it read and records the current count for the file. When the loop's done, it echoes the data to append to the log. If the file named in $1 is new (not mentioned in the log yet), the count appended will be 1.
If you need to deal with the broken file as input, then you can amend the code to count the number of entries for a file, instead of trusting the count value. The bare-array reference notation used in the (( … )) operation is necessary when incrementing the variable; you can't use ${array[sub]}++ with the increment (or decrement) operator because that evaluates to the value of the array element, not its name!
declare -A files # Associative array
while read -r file time count # Split line into three variables
do
echo "line: $file $time $count" # One echo - not two
((files[$file]++)) # Count the occurrences of file
done < "$WEDI_RC"
echo "$1 $(date +"%T") $(( ${files[$1]} + 1 ))" >> "$WEDI_RC"
You can even detect whether the format is in the broken or fixed style:
declare -A files # Associative array
while read -r file time count # Split line into three variables
do
echo "line: $file $time $count" # One echo - not two
if [ "$((++files[$file]))" != "$count" ]
then echo "$0: warning - count out of sync: ${files[$file]} vs $count" >&2
fi
done < "$WEDI_RC"
echo "$1 $(date +"%T") $(( ${files[$1]} + 1 ))" >> "$WEDI_RC"
I don't get exactly what you want to achieve with your test [ "$1"="$($line | grep ^$1)" ] but it seems you are checking that the line starts with the first argument.
If so, I think you can either:
provide the -o option to grep so that it prints just the matched output (i.e. $1)
use [[ "$line" =~ ^"$1" ]] as the test (a sketch follows below).
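A sketch of the question's loop rewritten with that test (variable names taken from the question):
while read -r line
do
    if [[ "$line" =~ ^"$1" ]]; then                 # line starts with the file name in $1
        number=$(echo "$line" | awk '{print $3}')   # third column holds the count
    fi
done < "$WEDI_RC"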

Reading multiple lines to redirect

Currently I have a file named testcase that has 5 10 15 14 on line one and
10 13 18 22 on line two.
I am trying to write a bash script that feeds those two inputs, line by line, into a program for testing. I have the while loop commented out, but I feel like that is going in the right direction.
I was also wondering whether it is possible, when I diff two files and they are the same, to return true or something like that, because I don't know if [[ "$youranswer" == "$correctanswer" ]] is working the way I wanted. I want to check whether the contents of the two files are the same and then run a certain command.
#while read -r line
#do
# args+=$"line"
#done < "$file_input"
# Read contents in the file
contents=$(< "$file_input")
# Display output of the test file
"$test_path" $contents > correctanswer 2>&1
# Display output of your file
"$your_path" $contents > youranswer 2>&1
# diff the solutions
if [ "$correctanswer" == "$youranswer" ]
then
echo "The two outputs were exactly the same "
else
echo "$divider"
echo "The two outputs were different "
diff youranswer correctanswer
echo "Do you wish to see the ouputs side-by-side?"
select yn in "Yes" "No"; do
case $yn in
Yes ) echo "LEFT: Your Output RIGHT: Solution Output"
sleep 1
vimdiff youranswer correctanswer; break;;
No ) exit;;
esac
done
fi
From the diff(1) man page:
Exit status is 0 if inputs are the same, 1 if different, 2 if trouble.
if diff -q file1 file2 &> /dev/null; then
    echo same
else
    echo different
fi
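Applied to the files the script already creates (correctanswer and youranswer), a sketch that uses the exit status in place of the string comparison from the question:
if diff -q correctanswer youranswer > /dev/null 2>&1; then
    echo "The two outputs were exactly the same"
else
    echo "The two outputs were different"
    diff youranswer correctanswer
fi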
EDIT:
But if you insist on reading from more than one file at a time... don't.
while IFS=$'\t' read -r correctanswer youranswer
do
...
done < <(paste correctfile yourfile)

Hanging bash loop script?

varrr=0
while read line
do
if [ $line -gt 500 -a $line -le 600 ]; then # for lines 501-600
echo $line >> 'file_out_${varrr}.ubi'
fi
done << 'file_in_${varrr}.ubi'
file_in_${varrr}.ubi is a text file with around 1000 lines. I want to print lines 501-600 to a new file.
Running this code leaves my Ubuntu terminal with a > symbol on a new line, as if I need to type another command to finish the loop. I can't figure out what is wrong with this loop though. It seems complete. Do you see any mistakes I've made? Thanks.
I'm only going to answer your specific question: it's because you used a heredoc << symbol, instead of a redirection <. Your last line should read:
done < 'file_in_${varrr}.ubi'
(observe the single <).
But then you'll realize that you have some quoting problems. So, your last line should read:
done < "file_in_${varrr}.ubi"
(observe the double quotes ").
Similarly, watch your quoting in line 6. You should have this instead:
echo "$line" >> "file_out_${varrr}.ubi"
(double quotes " for file_out_${varrr}.ubi).
But then, this will not behave as you expect... Maybe this will do:
varrr=0
linenb=0
while IFS= read -r line; do
((++linenb))
if ((linenb>500 && linenb<=600)); then # for lines 501-600
echo "$line" >> "file_out_${varrr}.ubi"
fi
done < "file_in_${varrr}.ubi"
Hope this helps!
If you just want to print lines from 501 to 600, why don't you use the following?
awk 'NR>=501 && NR<=600' file_in > file_out
awk 'NR==n' myfile prints line n of the file myfile. Then you can use ranges, as I wrote above.
You can simply use sed. It's the simplest tool for it and is cleaner and faster than a while loop with tests.
varrr=0
sed -n 501,600p "file_in_${varrr}.ubi" >> "file_out_${varrr}.ubi"
Or
varrr=0
sed -n 501,600p "file_in_${varrr}.ubi" > "file_out_${varrr}.ubi"
If you want to overwrite the existing data.
The mistake in your loop, by the way, is that you're not using a counter: you're comparing the line's content where you should be comparing its line number.
varrr=0
counter=0
while read line; do
(( ++counter ))
[[ counter -gt 500 && counter -le 600 ]] && echo "$line"
done < "file_in_${varrr}.ubi" > "file_out_${varrr}.ubi"
Note that you need to use < for input, not <<, and wrap your variables in double quotes, not single quotes.

bash script using multiple while loops and read line

I am trying to write a bash script to create some music playlists. The part that has me stuck is the while loop that reads lines. I figure I am overthinking this, so I turned to Stack Overflow for assistance.
# The first while loop is how many playlists I want to create
i=1
while [ $i -le $plist ]
do
echo -e "iteration $i"
i=$[$i + 1]
z=0
# This while loop is for the length of time I want the playlist to be
while [ $z -le $TOTAL ]
do
echo -e "Count $z"
z=$[$z + xxx]
# This while loop is for reading the track list previously generated.
# It would read the line, calculate the track length,
# add to $z, cp the track to a folder
while read line
do
secs=$(metaflac --show-total-samples --show-sample-rate "$line" | tr '\n' ' ' | awk '{print $1/$2}' -)
z=$[$z + $secs]
cp $line to destination folder
done
done
done
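The inner loop above is never fed any input; a hedged sketch of how it might be wired up (the $tracklist and $destination names are placeholders, not from the original post):
# assumption: $tracklist is the previously generated track list, one path per line
while read -r line
do
    secs=$(metaflac --show-total-samples --show-sample-rate "$line" | tr '\n' ' ' | awk '{print $1/$2}' -)
    z=$(( z + ${secs%.*} ))       # drop any fractional part for shell arithmetic
    cp "$line" "$destination"     # assumption: $destination is the target folder
done < "$tracklist"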

Looping file contents in bash

I have a file /tmp/a.txt whose contents I want to read into a variable many times. If EOF is reached, it should start again from the beginning.
I.e. if the contents of the file are "abc" and I want to get 10 chars, the result should be "abcabcabca".
For this I wrote an obvious script:
while [ 1 ];
do cat /tmp/a.txt;
done |
for i in {1..3};
do read -N 10 A;
echo "For $i: $A";
done
The only problem is that it hangs! I have no idea why it does so!
I am also open to other solutions in bash.
To repeat a line over and over you can:
yes "abc" | for i in {1..3}; do read -N 10 A; echo "for $i: $A"; done
yes will output the string forever, but the for i in {1..3} will only execute the do ... done part 3 times.
yes adds a "\n" after the string. If you don't want it, do:
yes "abc" | tr -d '\n' | for i in {1..3}; do read -N 10 A; echo "for $i: $A"; done
In all of the above, note that because the read is after a pipe, in bash it runs in a subshell, so "$A" will only be available inside the do ... done area and will be lost afterwards!
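A sketch that avoids the subshell by feeding the loop through process substitution instead of a pipe, so $A (and the counter) survive the loop; this restructures the answer's for loop into a while, an assumption rather than the original code:
count=0
while read -N 10 A
do
    (( ++count ))
    echo "for $count: $A"
    (( count == 3 )) && break      # stop after three reads
done < <(yes "abc" | tr -d '\n')
echo "after the loop, A is still: $A"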
To loop and read from a file, and also not do that in a subshell:
for i in {1..3}; do read -N 10 A ; echo "for $i: $A"; done < <(cat /the/file)
To be sure there is enough data in /the/file, repeat at will:
for i in {1..3}; do read -N 10 A ; echo "for $i: $A"; done < <(cat /the/file /the/file /the/file)
To test the latter: echo -n "abc" > /the/file (-n, so there is no trailing newline)
The script hangs because of the first loop. After the three iterations of the second (for) loop are done, the first loop keeps starting new cat instances, which read the file and then try to write the content abc to the pipe. Once the reader has exited, that write fails: there is a SIGPIPE kill, but it goes to the cat command, not to the loop itself, so the loop just starts another cat. The solution is to catch the error in the right place:
while [ 1 ];
do cat /tmp/a.txt || break
done |
for i in {1..3};
do read -N 10 A;
echo "For $i: $A";
done
By the way, the output is the following:
For 1: abcabcabca
For 2: bcabcabcab
For 3: cabcabcabc
<-- (Here the shell hangs no more)
