Reading multiple lines to redirect - bash

Currently I have a file named testcase. Line one contains 5 10 15 14 and line two contains 10 13 18 22.
I am trying to write a bash script that feeds those lines, one at a time, into a program as test input. I have the while loop commented out, but I feel like that is going in the right direction.
I was also wondering: if I diff two files and they are the same, can I get back true or something like that? I don't know whether if [[ "$youranswer" == "$correctanswer" ]] is working the way I want it to. I want to check whether the contents of the two files are the same and then run a certain command.
#while read -r line
#do
# args+=$"line"
#done < "$file_input"
# Read contents in the file
contents=$(< "$file_input")
# Display output of the test file
"$test_path" $contents > correctanswer 2>&1
# Display output of your file
"$your_path" $contents > youranswer 2>&1
# diff the solutions
if [ "$correctanswer" == "$youranswer" ]
then
echo "The two outputs were exactly the same "
else
echo "$divider"
echo "The two outputs were different "
diff youranswer correctanswer
echo "Do you wish to see the outputs side-by-side?"
select yn in "Yes" "No"; do
case $yn in
Yes ) echo "LEFT: Your Output RIGHT: Solution Output"
sleep 1
vimdiff youranswer correctanswer; break;;
No ) exit;;
esac
done
fi

From the diff(1) man page:
Exit status is 0
if inputs are the same, 1 if different, 2 if trouble.
if diff -q file1 file2 &> /dev/null; then
echo same
else
echo different
fi
EDIT:
But if you insist on reading from more than one file at a time... don't.
while IFS=$'\t' read -r correctanswer youranswer
do
...
done < <(paste correctfile yourfile)
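A self-contained sketch of the paste approach, with made-up file names and contents, comparing each pair of lines:

```shell
#!/bin/bash
# Demo of reading two files in lockstep via paste.
# File names and contents are invented for the example.
printf '%s\n' 10 20 30 > correctfile
printf '%s\n' 10 25 30 > yourfile

lineno=0
# paste joins corresponding lines with a tab; IFS=$'\t' splits them back apart
while IFS=$'\t' read -r correct yours; do
    (( ++lineno ))
    if [[ $correct == "$yours" ]]; then
        echo "line $lineno: match"
    else
        echo "line $lineno: differ ($yours vs $correct)"
    fi
done < <(paste correctfile yourfile)
```

Because the loop runs in the current shell (not a pipe), variables set inside it survive after the loop ends.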


Nested while loop not working in Bash

Beginner here so bear with me. I am trying to compare homework submissions from a solution file and a student submission file. The contents of each file have three problems, one per line:
problem 1 code
problem 2 code
problem 3 code
I want to compare each line in the solution with the corresponding line in the student's submission. I am using a for loop to run through each student file and a nested while loop to run through each line of the solution file and student file. For some reason the script is completely ignoring the while loop. I have put echoes between each line to see where the problem is (the echo $solution and echo $submission lines are just to check that the path is correct):
for submission in /home/myfolder/submissions/*
do
echo 1
solution=$(echo /home/myfolder/hwsolution/*)
echo 2
echo $solution
echo $submission
while read sans <&1 && read sol <&2
do
echo 3
echo Student awnser is: $sans
echo Solution is: $sol
echo 4
done 1<$(echo $submission) 2<$(echo $(echo $solution))
echo 5
done
When I run it I get:
1
2
/home/myfolder/hwsolution/solution
/home/myfolder/submissions/student1
5
1
2
/home/myfolder/hwsolution/solution
/home/myfolder/submissions/student2
5
1
2
/home/myfolder/hwsolution/solution
/home/myfolder/submissions/student3
5
It's not ignoring the while loop -- you're redirecting the file descriptors used for stdout and stderr, so echo can't write to the console within it.
for submission in /home/myfolder/submissions/*; do
solutions=( /home/myfolder/hwsolution/* )
if (( ${#solutions[@]} == 1 )) && [[ -e ${solutions[0]} ]]; then
solution=${solutions[0]}
else
echo "Multiple solution files found; don't know which to use" >&2
printf ' - %q\n' "${solutions[@]}" >&2
exit
fi
while read sans <&3 && read sol <&4; do
echo "Student awnser is: $sans"
echo "Solution is: $sol"
done 3<"$submission" 4<"$solution"
done
The most immediate change is that we're redirecting FD3 and FD4, not FD1 and FD2.
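Stripped of the question's specifics, the FD trick looks like this (file names and contents invented for the demo):

```shell
#!/bin/bash
# Read two files in lockstep on FDs 3 and 4, leaving stdout free for echo.
printf 'a1\na2\n' > submission.txt
printf 's1\ns2\n' > solution.txt

while read -r sans <&3 && read -r sol <&4; do
    echo "pair: $sans / $sol"
done 3<submission.txt 4<solution.txt
```

The loop stops as soon as either file runs out of lines, since the && makes both reads have to succeed.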

Bash command to see if a specific line of a file is empty

I'm trying to fix a bash script by adding in some error catching. I have a file (list.txt) that normally has content like this:
People found by location:
person: john [texas]
more info on john
Sometimes that file gets corrupted, and it only has that first line:
People found by location:
I'm trying to find a method to check that file to see if any data exists on line 2, and I want to include it in my bash script. Is this possible?
Simple and clean:
if [ -n "$(sed -n 2p < /path/to/file)" ]; then
# line 2 exists and it is not blank
else
# otherwise...
fi
With sed we extract the second line only. The test expression will evaluate to true only if there is a second non-blank line in the file.
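A quick self-contained check (file names invented) covering both the healthy and the corrupted case:

```shell
#!/bin/bash
# Demo files mirroring the question: one intact, one truncated to its header.
printf 'People found by location:\nperson: john [texas]\n' > good.txt
printf 'People found by location:\n' > bad.txt

check_line2() {
    # sed -n 2p prints only line 2; -n suppresses everything else
    if [ -n "$(sed -n 2p "$1")" ]; then
        echo "$1: line 2 has data"
    else
        echo "$1: line 2 missing or blank"
    fi
}

check_line2 good.txt
check_line2 bad.txt
```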
I assume that you want to check whether line 2 of a given file contains any data or not.
[ "$(sed -n '2p' inputfile)" != "" ] && echo "Something present on line 2" || echo "Line 2 blank"
This would work even if the inputfile has just one line.
If you simply want to check whether the inputfile has one line or more, you can say:
[ "$(sed -n '$=' inputfile)" == "1" ] && echo "Only one line" || echo "More than one line"
Sounds like you want to check if your file has more than 1 line
if (( $(wc -l < filename) > 1 )); then
echo I have a 2nd line
fi
Another approach which doesn't require external commands is:
if ( IFS=; read && read -r && [[ -n $REPLY ]]; ) < /path/to/file; then
echo true
else
echo false
fi

Comparison function that compares two text files in Unix

I was wondering if anyone could tell me if there is a function available in unix, bash that compares all of the lines of the files. If they are different it should output true/false, or -1,0,1. I know these cmp functions exist in other languages. I have been looking around the man pages but have been unsuccessful. If it is not available, could someone help me come up with an alternative solution?
Thanks
There are several ways to do this:
cmp -s file1 file2: Look at the value of $?. It is zero if the files match and non-zero otherwise.
diff file1 file2 > /dev/null: Some forms of the diff command can take a parameter that tells it not to output anything. However, most don't. After all, you use diff to see the differences between two files. Again, the exit code (you can check the value of $?) will be 0 if the files match and non-zero otherwise.
You can use these command in a shell if statement:
if cmp -s file1 file2
then
echo "The files match"
else
echo "The files are different"
fi
The diff command is made specifically for text files. The cmp command should work with all binary files too.
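A minimal demonstration of the exit codes (throwaway file names):

```shell
#!/bin/bash
# cmp exits 0 for identical files, 1 for different files, >1 on trouble.
echo "hello" > a.txt
echo "hello" > b.txt
echo "world" > c.txt

cmp -s a.txt b.txt; echo "a vs b: $?"   # identical files
cmp -s a.txt c.txt; echo "a vs c: $?"   # different files
```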
There is a simple cmp file1 file2 command that does just that. It returns 0 if they are equal and 1 if they are different, so it's trivial to use in an if:
if cmp file1 file2; then
...
fi
Hope this helps =)
#!/bin/bash
file1=old.txt
file2=new.txt
echo " TEST 1 : "
echo
if [ "$(cmp -s "${file1}" "${file2}"; echo $?)" -eq 0 ]
then
echo "The files match : ${file1} - ${file2}"
else
echo "The files are different : ${file1} - ${file2}"
fi
echo
echo " TEST 2 : "
echo
if cmp -s "$file1" "$file2"
then
echo "The files match"
else
echo "The files are different"
fi
echo
echo " TEST 3 : md5 / md5sum - compute and check MD5 message digest"
echo
md1=$(md5 ${file1});
md2=$(md5 ${file2});
mdd1=$(echo $md1 | awk '{print $4}' )
mdd2=$(echo $md2 | awk '{print $4}' )
# or md5sum depends on your linux flavour :D
#md1=$(md5sum ${file1});
#md2=$(md5sum ${file2});
#mdd1=$(echo $md1 | awk '{print $1}' )
#mdd2=$(echo $md2 | awk '{print $1}' )
echo $md1
echo $mdd1
echo $md2
echo $mdd2
echo
# md5 hashes are hex strings, so compare with =, not the integer test -eq
if [ "$mdd1" = "$mdd2" ];
then
echo "The files match : ${file1} - ${file2}"
else
echo "The files are different : ${file1} - ${file2}"
fi
You could do an md5 on the two files, then compare the results in bash.
No Unix box here to test, but this should be right.
#!/bin/bash
# -q makes BSD/macOS md5 print just the hash; plain md5 output embeds the filename
md1=$(md5 -q file1);
md2=$(md5 -q file2);
if [ "$md1" = "$md2" ]; then
echo The same
else
echo Different
fi
echo "read first file"
read f1
echo "read second file"
read f2
diff -s "$f1" "$f2" # reports whether the files are identical

Bash script to automatically test program output - C

I am very new to writing scripts and I am having trouble figuring out how to get started on a bash script that will automatically test the output of a program against expected output.
I want to write a bash script that will run a specified executable on a set of test inputs, say in1 in2 etc., against corresponding expected outputs, out1, out2, etc., and check that they match. The file to be tested reads its input from stdin and writes its output to stdout. So executing the test program on an input file will involve I/O redirection.
The script will be invoked with a single argument, which will be the name of the executable file to be tested.
I'm having trouble just getting going on this, so any help at all (links to any resources that further explain how I could do this) would be greatly appreciated. I've obviously tried searching myself but haven't been very successful in that.
Thanks!
If I get what you want; this might get you started:
A mix of bash + external tools like diff.
#!/bin/bash
# If number of arguments less then 1; print usage and exit
if [ $# -lt 1 ]; then
printf "Usage: %s <application>\n" "$0" >&2
exit 1
fi
bin="$1" # The application (from command arg)
diff="diff -iad" # Diff command, or what ever
# An array, do not have to declare it, but is supposedly faster
declare -a file_base=("file1" "file2" "file3")
# Loop the array
for file in "${file_base[@]}"; do
# Pad file_base with suffixes
file_in="$file.in" # The in file
file_out_val="$file.out" # The out file to check against
file_out_tst="$file.out.tst" # The outfile from test application
# Validate infile exists (do the same for out validate file)
if [ ! -f "$file_in" ]; then
printf "In file %s is missing\n" "$file_in"
continue;
fi
if [ ! -f "$file_out_val" ]; then
printf "Validation file %s is missing\n" "$file_out_val"
continue;
fi
printf "Testing against %s\n" "$file_in"
# Run application, redirect in file to app, and output to out file
"./$bin" < "$file_in" > "$file_out_tst"
# Execute diff
$diff "$file_out_tst" "$file_out_val"
# Check exit code from previous command (ie diff)
# We need to add this to a variable else we can't print it
# as it will be changed by the if [
# Iff not 0 then the files differ (at least with diff)
e_code=$?
if [ "$e_code" -ne 0 ]; then
printf "TEST FAIL : %d\n" "$e_code"
else
printf "TEST OK!\n"
fi
# Pause by prompt
read -p "Enter a to abort, anything else to continue: " input_data
# Iff input is "a" then abort
[ "$input_data" == "a" ] && break
done
# Clean exit with status 0
exit 0
Edit.
Added exit code check, and a short walk-through:
This will in short do:
Check if argument is given (bin/application)
Use an array of "base names", loop this and generate real filenames.
I.e.: Having array ("file1" "file2") you get
In file: file1.in
Out file to validate against: file1.out
Out file: file1.out.tst
In file: file2.in
...
Execute application and redirect in file to stdin for application by <, and redirect stdout from application to out file test by >.
Use a tool like i.e. diff to test if they are the same.
Check exit / return code from tool and print message (FAIL/OK)
Prompt for continuance.
Any and all of which can of course be modified, removed, etc.
Some links:
TLDP; Advanced Bash-Scripting Guide (can be a bit more readable with this)
Arrays
File test operators
Loops and branches
Exit-status
...
bash-array-tutorial
TLDP; Bash-Beginners-Guide
Expect could be a perfect fit for this kind of problem:
Expect is a tool primarily for automating interactive applications
such as telnet, ftp, passwd, fsck, rlogin, tip, etc. Expect really
makes this stuff trivial. Expect is also useful for testing these same
applications.
First take a look at the Advanced Bash-Scripting Guide chapter on I/O redirection.
Then I have to ask Why use a bash script at all? Do it directly from your makefile.
For instance I have a generic makefile containing something like:
# type 'make test' to run a test.
# for example this runs your program with jackjill.txt as input
# and redirects the stdout to the file jackjill.out
test: $(program_NAME)
./$(program_NAME) < jackjill.txt > jackjill.out
diff -q jackjill.out jackjill.expected
You can add as many tests as you want like this. You just diff the output file each time against a file containing your expected output.
Of course this is only relevant if you're actually using a makefile for building your program. :-)
Functions. Herestrings. Redirection. Process substitution. diff -q. test.
Expected outputs are a second kind of input.
For example, if you want to test a square function, you would have input like (0, 1, 2, -1, -2) and expected output as (0, 1, 4, 1, 4).
Then you would compare every result of input to the expected output and report errors for example.
You could work with arrays:
in=(0 1 2 -1 -2)
out=(0 1 4 2 4)
for i in $(seq 0 $((${#in[@]} - 1)))
do
(( ${in[i]} * ${in[i]} - ${out[i]} )) && echo -n bad" " || echo -n fine" "
echo $i ": " ${in[i]}"² ?= " ${out[i]}
done
fine 0 : 0² ?= 0
fine 1 : 1² ?= 1
fine 2 : 2² ?= 4
bad 3 : -1² ?= 2
fine 4 : -2² ?= 4
Of course you can read both arrays from a file.
Arithmetic tests use (( ... )); the test builtin and [[ ... ]] also cover strings and files. Try
help test
for an overview.
Reading strings wordwise from a file:
for n in $(< f1); do echo $n "-" ; done
Read into an array:
arr=($(< file1))
Read file linewise:
for i in $(seq 1 $(cat file1 | wc -l ))
do
line=$(sed -n ${i}p file1)
echo $line"#"
done
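The sed loop above re-scans the file once per line; the usual bash idiom reads the file in a single pass (demo file name invented):

```shell
#!/bin/bash
# Read a file line by line without re-running sed for every line.
printf 'foo\nbar\nbaz\n' > file1

i=0
while IFS= read -r line; do
    (( ++i ))
    echo "$i: $line#"
done < file1
```

IFS= preserves leading/trailing whitespace and -r keeps backslashes literal, so each line arrives exactly as it is in the file.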
Testing against program output sounds like string comparison and capturing of program output n=$(cmd param1 param2):
asux:~/prompt > echo -e "foo\nbar\nbaz"
foo
bar
baz
asux:~/prompt > echo -e "foo\nbar\nbaz" > file
asux:~/prompt > for i in $(seq 1 3); do line=$(sed -n ${i}p file); test "$line" = "bar" && echo match || echo fail ; done
fail
match
fail
Also useful: regular expression matching on strings with =~ inside [[ ... ]] brackets:
for i in $(seq 1 3)
do
line=$(sed -n ${i}p file)
echo -n $line
if [[ "$line" =~ ba. ]]; then
echo " "match
else echo " "fail
fi
done
foo fail
bar match
baz match

For Loop in Shell Script - add breakline in csv file

I'm trying to use a for loop in a shell script.
I am executing a command from a text file. I wish to execute each command 10 times and insert some stats into a csv file. After that command has been done, I want to start the next BUT put a line break in the CSV file after the first command that was done 10 times.
Is the following correct:
#Function processLine
processLine(){
line="$@"
for i in 1 2 3 4 5 6 7 8 9 10
do
START=$(date +%s.%N)
echo "$line"
eval $line > /dev/null 2>&1
END=$(date +%s.%N)
DIFF=$(echo "$END - $START" | bc)
echo "$line, $START, $END, $DIFF" >> file.csv 2>&1
echo "It took $DIFF seconds"
echo $line
done
}
Thanks all for any help
UPDATE
It is doing the loop correctly, but I can't get it to add a line break after each command is executed 10 times.
processLine()
{
line="$@"
echo $line >> test_file
for ((i = 1; i <= 10 ; i++))
do
# do not move to the next line
echo -n "$i," >> test_file
done
# move to the next line: add the break
echo >> test_file
}
echo -n > test_file
processLine 'ls'
processLine 'find . -name "*"'
How about just adding a line "echo >> file.csv" after done? Or do you only want an empty line between each block of 10? Then you could do the following:
FIRST=1
processline()
{
if (( FIRST )); then
FIRST=0
else
echo >> file.csv
fi
...rest of code...
}
Otherwise you might want to give an example of the desired output and the output you are getting now.
It looks reasonable. Does it do what you want it to?
You could simplify some things, e.g.
DIFF=$(echo "$END - $START" | bc)
could be just
DIFF=$((END - START))
if END and START are integers, and there's no need to put things in variables if you're only going to use them once.
If it's not doing what you want, edit the question to describe the problem (what you see it doing and what you'd rather have it do).
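If whole-second resolution is enough, a minimal sketch of the integer form (using date +%s rather than %s.%N):

```shell
#!/bin/bash
# Time a command with integer seconds; no bc needed.
# Sub-second commands will report 0 seconds at this resolution.
START=$(date +%s)
sleep 1
END=$(date +%s)
DIFF=$((END - START))
echo "It took $DIFF seconds"
```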
