Reading in File line by line w/ Bash

I'm creating a bash script that reads a file in line by line; the lines are later formatted and organised by name and then date. I cannot see why this code isn't working: no errors show up, even though I have tried using the input and output filename variables on their own, with a find lookup, and with an export command.
export inputfilename="employees.txt"
export outputfilename="branch.txt"
directoryinput=$(find -name $inputfilename)
directoryoutput=$(find -name $outputfilename)
n=1
if [[ -f "$directoryinput" ]]; then
    while read line; do
        echo "$line"
        n=$((n+1))
    done < "$directoryoutput"
else
    echo "Input file does not exist. Please create an employees.txt file"
fi
All help is very much appreciated, thank you!
NOTE: As people noticed, I forgot to add the $ sign on the redirection to the file, but that was just a mistake in copying my code; I do have the $ sign in my actual script and still get no result.

The best and most idiomatic way to read a file line by line is:
while IFS= read -r line; do
    # process the line
    printf '%s\n' "$line"
done < "file"
More on this topic can be found in the BashFAQ.
However, try not to read files in bash line by line; you can (ok, almost) always avoid reading a stream line by line in bash. Reading a file line by line in bash is extremely slow and shouldn't be done. For simple cases, the standard Unix tools (with the help of xargs or parallel) can be used; for more complicated ones, use awk or GNU datamash.
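For instance, summing a numeric column is a one-liner in awk and runs far faster than a bash while-read loop. A minimal sketch; the file path and data below are made up for illustration:

```shell
#!/bin/sh
# Build a small demo input file, then let awk do the per-line work
# (accumulate column 2) instead of a bash while-read loop.
printf '%s\n' 'alice 3' 'bob 5' 'carol 2' > /tmp/demo_input.txt
awk '{ sum += $2 } END { print sum }' /tmp/demo_input.txt
# prints 10
```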
done < "directoryoutput"
The code is not working because the while read loop's standard input is redirected from a file literally named directoryoutput. Since no such file exists, your script fails.
directoryoutput=$(find -name $outputfilename)
One can simply feed the variable's value (a trailing newline is appended automatically) to a while read loop using a here-string construction:
done <<< "$directoryoutput"
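A quick sketch of the here-string form; the variable's contents here are invented:

```shell
#!/bin/bash
# Feed a multi-line variable to a while-read loop with a here-string;
# <<< appends a trailing newline, so the last line is read too.
files=$'one.txt\ntwo.txt\nthree.txt'
count=0
while IFS= read -r line; do
    count=$((count + 1))
done <<< "$files"
echo "$count"
# prints 3
```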
directoryinput=$(find -name $inputfilename)
if [[ -f "$directoryinput" ]]
This is ok as long as you have only one file named $inputfilename under the current directory. Also, it makes little sense to find a file and then check for its existence. If there are multiple matches, find returns a newline-separated list of names. A small check such as if [ "$(printf '%s\n' "$directoryinput" | wc -l)" -eq 1 ], or simply taking the first match with find . -name "$inputfilename" | head -n1, would be better.
while read line;
do
    echo "$line"
    n=$((n+1))
done < "directoryoutput"
The intention is pretty clear here. This is just:
n=$(<directoryoutput wc -l)
cat "directoryoutput"
Except that while read trims leading and trailing whitespace from each line and is IFS-dependent, while cat does not.
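The whitespace trimming is easy to demonstrate side by side:

```shell
#!/bin/bash
# Plain 'read' strips leading/trailing IFS whitespace;
# 'IFS= read -r' preserves the line byte-for-byte.
printf '   padded   \n' | { read line; printf '[%s]\n' "$line"; }
printf '   padded   \n' | { IFS= read -r line; printf '[%s]\n' "$line"; }
# prints:
# [padded]
# [   padded   ]
```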
Also, always remember to quote your variables unless you have a reason not to.
Have a look at shellcheck, which can find the most common mistakes in scripts.
I would do it more like this:
inputfilename="employees.txt"
outputfilename="branch.txt"

directoryinput=$(find . -name "$inputfilename")
directoryinput_cnt=$(printf '%s' "$directoryinput" | grep -c .)
if [ "$directoryinput_cnt" -eq 0 ]; then
    echo "Input file does not exist. Please create a '$inputfilename' file" >&2
    exit 1
elif [ "$directoryinput_cnt" -gt 1 ]; then
    echo "Multiple files named '$inputfilename' exist in the current path" >&2
    exit 1
fi

directoryoutput=$(find . -name "$outputfilename")
directoryoutput_cnt=$(printf '%s' "$directoryoutput" | grep -c .)
if [ "$directoryoutput_cnt" -eq 0 ]; then
    echo "Output file does not exist. Please create a '$outputfilename' file" >&2
    exit 1
elif [ "$directoryoutput_cnt" -gt 1 ]; then
    echo "Multiple files named '$outputfilename' exist in the current path" >&2
    exit 1
fi

cat "$directoryoutput"
n=$(<"$directoryoutput" wc -l)

Related

Bash script trying to list files unsuccessfully

I'm reading some file paths and names from a text file and trying to test whether each file exists. I'm not sure what I'm doing wrong, but the first echo prints the file path and name while the echo inside the if statement doesn't. Any ideas?
#!/bin/bash
while read line; do
    echo $line
    if [ -f "$line" ]; then
        echo "found: $line"
    fi
done < /mbackup/temp/images.txt
The key change is adding the -r option to read. That option is documented as:
Backslash does not act as an escape character. The backslash is considered to be part of the line. In particular, a backslash-newline pair may not then be used as a line continuation.
This helps prevent special characters in file names from interfering with your script.
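The difference becomes visible as soon as a line contains backslashes:

```shell
#!/bin/bash
# Without -r, read consumes backslashes as escape characters;
# with -r, they are kept as part of the line.
printf 'a\\b\\c\n' | { read line; printf '%s\n' "$line"; }      # backslashes eaten
printf 'a\\b\\c\n' | { read -r line; printf '%s\n' "$line"; }   # backslashes kept
# prints:
# abc
# a\b\c
```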
I tested this with files containing special characters and it works as expected.
#!/bin/bash
while read -r line; do
    echo "$line"
    if [ -f "$line" ]; then
        echo "found: $line"
    fi
done < /mbackup/temp/images.txt

Syntax error: "then" unexpected (expecting "done")

Here is my problem statement.
Write a shell script that takes the name of a folder as a command line argument and produces a file that contains the names of all subfolders with size 0 (that is, empty subfolders).
This is my shell script.
ls $1
while read folder
do
files = 'ls $folder | wc -l'
if[$files -eq 0];
then
echo "$folder">>output.txt
echo "File deleted"
else
echo "File is not empty"
fi
done
When I execute my script (using 'sh filename'), it shows a syntax error:
Syntax error: "then" unexpected (expecting "done")
Is there anything wrong with my script?
Don't forget that in the shell, [ is a binary that takes parameters and returns true or false (0 or 1), and if is a keyword that checks whether the return status of the next command it runs is true (0).
So, when you write
if[$files -eq 0]
your shell understands nothing: it tries to launch a program named if[... (the if glued to the bracket and the expanded value of $files), and then finds a then without having detected any if.
To fix your problem, put a space after the if and after the [, because a binary must have spaces between its name and its arguments.
ls $1
while read folder
do
    files=`ls $folder | wc -l`
    if [ $files -eq 0 ]
    then
        echo "$folder" >> output.txt
        echo "File deleted"
    else
        echo "File is not empty"
    fi
done
Try something like this:
ls $1
while read folder
do
    files=`ls $folder | wc -l`
    if [ $files -eq 0 ]; then
        echo "$folder" >> output.txt
        echo "File deleted"
    else
        echo "File is not empty"
    fi
done
Notice there is no space in files=..., and those are backticks (`), not single quotes (').
Also notice the space between 'if' and '['.
There may also be a line-ending error. Just do 2 steps:
run hexdump -C yourscript.sh to inspect the bytes,
run tr -d '\r' < yourscript.sh > yournewscript.sh
It will create a new, corrected file; then run the new file.
You've already got answers describing how your existing script needs to be fixed:
no spaces around the = when you set the $files variable,
backquotes instead of single ticks for your command substitution,
a space after if, and spaces around the parts of the conditional expression.
Your script suffers from the parsing-ls problem: filenames may be treated badly if they contain special characters like newlines. You may think this isn't a big issue when all you want to do is check for the existence or nonexistence of files (i.e. count == 0), but the way you're doing it is still cumbersome, and it encourages bad habits.
How about, instead, something like this:
while read -r folder; do
    files=0
    for entry in "$folder"/*; do
        if [ -e "$entry" ]; then
            files=1
        fi
        break
    done
    if [ $files -eq 0 ]; then
        echo "$folder" >> output.txt
    else
        echo "not empty: $folder" >&2
    fi
done
Instead of counting files with a command substitution and a pipe, this uses a for loop to set a flag as soon as any file is found. This will almost always be faster.
Note that this is POSIX-compliant. If your shell is a more advanced one, like bash or zsh, you have more elegant/efficient options available.
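In bash, for instance, nullglob makes the emptiness check direct. A sketch using a throwaway temporary directory:

```shell
#!/bin/bash
# With nullglob (and dotglob, so hidden files count too), a glob in
# an empty directory expands to nothing, giving an empty array.
shopt -s nullglob dotglob
dir=$(mktemp -d)
entries=("$dir"/*)
if [ "${#entries[@]}" -eq 0 ]; then
    echo "empty"
else
    echo "not empty"
fi
rmdir "$dir"
# prints "empty"
```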

for loop in UNIX Script is not working

for i in `cat ${DIR}/${INDICATOR_FLIST}`
do
X=`ls $i|wc -l`
echo $X
echo $X>>txt.txt
done
I have code like this to check if a file is present in a directory or not,
but it is not working and gives an error like this:
not foundtage001/dev/BadFiles/RFM_DW/test_dir/sales_crdt/XYZ.txt
You can see there is no space between "not found" and the file path.
I have a code like this to check if file is present in a directory or not ...
It seems that you're trying to read a list of file names from ${DIR}/${INDICATOR_FLIST} and then determine whether those files actually exist. The main problem is that you're parsing the output of ls to figure out whether each file exists.
That is what produces the sort of output you see: a mix of stderr and stdout. Use the test operator instead, as given below.
The following will tell you whether each file exists or not:
while read -r line; do
    [ -e "${line}" ] && echo "${line} exists" || echo "${line} does not exist"
done < "${DIR}/${INDICATOR_FLIST}"
Your file ${INDICATOR_FLIST} has CRLF line terminators (DOS-style). You need to strip out the CR characters, as Unix convention is for LF-only line terminators.
You can tell this by the way "not found" is printed at the start of the line. The immediately preceding character (the last char of the filename) is a CR, which sends the cursor back to the start of the line.
Find a dos2unix utility, or run tr -d \\015 over it (this deletes all CRs indiscriminately).
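A minimal sketch of the tr fix; the file paths here are throwaway examples:

```shell
#!/bin/sh
# Write a file with DOS (CRLF) line endings, then delete the CRs
# to get Unix (LF-only) line endings.
printf 'one\r\ntwo\r\n' > /tmp/dos.txt
tr -d '\r' < /tmp/dos.txt > /tmp/unix.txt
cat /tmp/unix.txt
```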
Maybe the location of your file isn't right.
Take this example:
m:~ tr$ echo "1 2 3 4 5" > file.txt
m:~ tr$ cat file.txt
1 2 3 4 5
m:~ tr$ for i in `cat file.txt`;do echo $i ;done
1
2
3
4
5
m:~ tr$
You could check, before the for loop, whether the file exists:
echo "location: ${DIR}/${INDICATOR_FLIST}"
if [ -e "${DIR}/${INDICATOR_FLIST}" ]; then echo "file exists"; else echo "file was not found"; fi
for i in `cat ${DIR}/${INDICATOR_FLIST}`
do
    X=`ls "$i" | wc -l`
    echo "$X"
    echo "$X" >> txt.txt
done
The proper syntax is:
for i in `cat data-file`
do
    echo $i
done
which you are following, so the problem must be in the values of $DIR and $INDICATOR_FLIST; double-check the location of your file.

shell script: if statement

I'm following the tutorial here: http://bash.cyberciti.biz/guide/If..else..fi#Number_Testing_Script
My script looks like:
lines=`wc -l $var/customize/script.php`
if test $lines -le 10
then
echo "script has less than 10 lines"
else
echo "script has more than 10 lines"
fi
but my output looks like:
./boot.sh: line 33: test: too many arguments
script has more than 10 lines
Why does it say I have too many arguments? I fail to see how my script is different from the one in the tutorial.
The wc -l file command prints two words: the line count and the file name. Try this:
lines=`wc -l file | awk '{print $1}'`
To debug a bash script (boot.sh), you can run:
$ bash -x ./boot.sh
It'll print every line as it is executed.
wc -l file
outputs
1234 file
use
lines=`wc -l < file`
to get just the number of lines. Also, some people prefer this notation instead of backticks:
lines=$(wc -l < file)
Also, since we don't know whether $var contains spaces, or whether the file exists:
fn="$var/customize/script.php"
if test ! -f "$fn"
then
    echo "file not found: $fn"
elif test $(wc -l < "$fn") -le 10
then
    echo "less than 11 lines"
else
    echo "more than 10 lines"
fi
Also, you should use:
if [[ $lines -gt 10 ]]; then
    something
else
    something
fi
The test condition form is really outdated, and so is its immediate successor, [ condition ], mainly because you have to be really careful with those forms. For example, you must quote any $var you pass to test or [ ], and there are other details that get hairy (tests are treated in every way like any other command). Check out this article for some details.
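The quoting pitfall with [ ] is easy to reproduce with an empty variable (the values here are invented):

```shell
#!/bin/bash
# With [ ], an unquoted empty variable vanishes during word splitting
# and breaks the test's syntax; quoting it (or using [[ ]]) is safe.
var=""
if [ "$var" = "hello" ]; then echo "match"; else echo "no match"; fi
if [[ $var == "hello" ]]; then echo "match"; else echo "no match"; fi
# [ $var = "hello" ] would expand to [ = "hello" ] and raise an error,
# because the left-hand operand disappears entirely.
```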

shell script to count readable files

How can I write a shell script file_readable which:
accepts some number of names as arguments,
checks each name to see if it is a regular file and readable, and
outputs a count of the number of such files.
For example:
$ sh file_readable /etc/fstab /etc/ssh/ssh_host_rsa_key /etc/does-not-exist
1
Of these, only /etc/fstab is likely to both exist and be readable.
So far I have put this together, but it does not work correctly; can anybody help me, please?
#!/bin/sh
for filename in "$@"
do
    if test -f "$filename"
    then echo | wc -l
    else echo $?
    fi
done
then echo | wc -l
If the file exists and is regular, you print the number of lines in an empty string plus "\n", which is always equal to 1. That doesn't sound very useful, does it?
All you need is to increment a counter and print it at the end.
#!/bin/sh
readable_files=0
for filename in "$@"; do
    if [ -f "$filename" ] && [ -r "$filename" ]; then
        readable_files=$(( readable_files + 1 ))
    fi
done
echo "${readable_files}"
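The counting logic can be exercised on throwaway files; this sketch also checks readability with -r, which the task's "regular file and readable" requirement asks for:

```shell
#!/bin/sh
# Count regular, readable files among a set of names; two of the
# three names below exist, so the count comes out as 2.
tmp=$(mktemp -d)
touch "$tmp/a" "$tmp/b"
count=0
for filename in "$tmp/a" "$tmp/b" "$tmp/missing"; do
    if [ -f "$filename" ] && [ -r "$filename" ]; then
        count=$((count + 1))
    fi
done
echo "$count"
rm -rf "$tmp"
# prints 2
```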
