Bash: Command not found inside for loop

I'm trying to iterate through a list of folder names and perform some operations on each name, but whatever I try to do inside the loop results in a "Command not found".
For example, the following code:
#!/bin/bash
C=$(echo "ABCDEF" | cut -c1)
R=$(echo "ABCDEF" | sed "s/A/X/g")
echo $C
echo $R
for PATH in $(find . -maxdepth 1 -type d); do
C=$(echo $PATH | cut -c1)
R=$(echo $PATH | sed "s/A/X/g")
echo $C
done
Outputs:
A
XBCDEF
line 9: cut: command not found
line 10: sed: command not found

PATH is a special variable that tells the shell where to find common utilities. For instance, sed and cut are usually in /bin and $PATH usually includes /bin.
So, in your for loop, you've redefined $PATH to be the result of your find operation. You'll have better luck if you use a variable name other than PATH.
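For example, a minimal sketch of the same loop with the variable renamed to dir (any name other than PATH will do):
#!/bin/bash
# "dir" is an ordinary variable, so $PATH (and therefore cut and sed) keeps working
for dir in $(find . -maxdepth 1 -type d); do
    C=$(echo "$dir" | cut -c1)
    R=$(echo "$dir" | sed "s/A/X/g")
    echo "$C"
    echo "$R"
done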

Related

Syntax error: not sure what it's saying or how to fix it

I am trying to write a shell script for school that searches your entire home directory for all files with the .java extension. For each such file, list the number of lines in the file along with its location (that is, its full path).
My script looks like:
#!/bin/bash
total=0
for currfile in $(find ~ -name "*.java" -print)
do
total=$[total+($(wc -l $currfile| awk '{print $1}'))]
echo -n 'total=' $total
echo -e -n '\r'
done
echo 'total=' $total
When I run it from Konsole I get the error:
./fileQuest.sh: line 5: total+(): syntax error: operand expected (error token is ")")
I am a novice and cannot figure out what the error is telling me. Any help would be appreciated.
total+()
This is the expression that's being evaluated inside of $[...]. Notice that the parentheses are empty. There should be a number there. It indicates that the $(wc | awk) bit is yielding an empty string.
total=$[total+($(wc -l $currfile| awk '{print $1}'))]
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If that part is blank then you get:
total=$[total+()]
Note that wc can handle multiple file names natively. You don't need to write your own loop. You could use find -exec to call it directly instead.
find ~ -name "*.java" -exec wc -l {} +
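If you do want to keep a loop and a running total, here is a rough sketch of one way to fix the original script, using $(( )) arithmetic and reading the count straight from wc -l (it still assumes the .java paths contain no spaces):
#!/bin/bash
total=0
for currfile in $(find ~ -name "*.java" -print)
do
    # "wc -l < file" prints only the line count, so awk is not needed
    count=$(wc -l < "$currfile")
    total=$((total + count))
done
echo "total=$total"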

Shell: printing just the part after the . (dot)

I need to find just the extensions of all files in a directory (if two files share an extension, it should appear only once). I already have it, but the output of my script is like
test.txt
test2.txt
hello.iso
bay.fds
hellllu.pdf
I'm using grep -e '.' and it just highlights the dots.
And I need just these extensions collected in one variable, like txt,iso,fds,pdf.
Is there anyone who could help? I already had it working once, but that version used an array. Today I found out it has to work in dash too.
You can use find with awk to get all unique extensions:
find . -type f -name '?*.?*' -print0 |
awk -F. -v RS='\0' '!seen[$NF]++{print $NF}'
It can be done with find as well, but I think this is easier:
for f in *.*; do echo "${f##*.}"; done | sort -u
If you want a comma-separated list of the unique extensions assigned to a variable, you can do this:
ext=$(for f in *.*; do echo "${f##*.}"; done | sort -u | paste -sd,)
echo $ext
csv,pdf,txt
Alternatively, with ls:
ls -1 *.* | rev | cut -d. -f1 | rev | sort -u | paste -sd,
The rev/rev pair is needed if a filename contains more than one dot, assuming the extension comes after the last dot. For any other directory, simply change *.* to dirpath/*.* in all of the scripts.
I'm not sure I understand your comment. If you don't assign the result to a variable, it will simply be printed to standard output. If you want to pass the directory name to a script, put the code into a script file and replace dirpath with $1, assuming the directory will be the script's first argument:
#!/bin/bash
# print unique extensions in the directory passed as an argument, e.g.
ls -1 "$1"/*.* ...
If you have subdirectories with extensions, the scripts above will include them as well; to limit the output to regular files only, replace the ls ... with
find . -maxdepth 1 -type f -name "*.*" | ...
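Putting those pieces together, a rough sketch in plain sh (so it should also work under dash) that collects the unique extensions of the regular files in one directory into a comma-separated variable:
#!/bin/sh
# unique extensions of regular files in the current directory, comma separated
ext=$(find . -maxdepth 1 -type f -name "*.*" | sed 's/.*\.//' | sort -u | paste -sd,)
echo "$ext"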

How to split the contents of `$PATH` into distinct lines?

Suppose echo $PATH yields /first/dir:/second/dir:/third/dir.
Question: How does one echo the contents of $PATH one directory at a time as in:
$ newcommand $PATH
/first/dir
/second/dir
/third/dir
Preferably, I'm trying to figure out how to do this with a for loop that issues one instance of echo per instance of a directory in $PATH.
echo "$PATH" | tr ':' '\n'
Should do the trick. This simply takes the output of echo "$PATH" and replaces every colon with a newline.
Note that the quotation marks around $PATH prevent the collapsing of multiple successive spaces in the output while still printing the content of the variable.
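If you then want to run something per directory instead of just printing, a small sketch that reads that output line by line:
echo "$PATH" | tr ':' '\n' | while IFS= read -r dir; do
    echo "$dir"    # or any other per-directory command
done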
As an additional option (and in case you need the entries in an array for some other purpose) you can do this with a custom IFS and read -a:
IFS=: read -r -a patharr <<<"$PATH"
printf %s\\n "${patharr[@]}"
Or since the question asks for a version with a for loop:
for dir in "${patharr[@]}"; do
echo "$dir"
done
How about this:
echo "$PATH" | sed -e 's/:/\n/g'
(See sed's s command; sed -e 'y/:/\n/' will also work, and is equivalent to the tr ":" "\n" from some other answers.)
It's preferable not to complicate things unless absolutely necessary: a for loop is not needed here. There are other ways to execute a command for each entry in the list, more in line with the Unix Philosophy:
This is the Unix philosophy: Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface.
such as:
echo "$PATH" | sed -e 's/:/\n/g' | xargs -n 1 echo
This is functionally equivalent to a for loop iterating over the PATH elements, executing that last echo command for each element. The -n 1 tells xargs to supply only one argument to its command; without it we would get the same output as echo "$PATH" | sed -e 'y/:/ /'.
Since this uses xargs, which has built-in support to split the input, and echoes the input if no command is given, we can write that as:
echo -n "$PATH" | xargs -d ':' -n 1
The -d ':' tells xargs to use : to separate its input rather than a newline, and the -n on the echo stops it from writing a trailing newline; otherwise we would end up with a blank trailing line.
Here is another, shorter one:
echo -e ${PATH//:/\\n}
You can use tr (translate) to replace the colons (:) with newlines (\n), and then iterate over that in a for loop.
directories=$(echo $PATH | tr ":" "\n")
for directory in $directories
do
echo $directory
done
My idea is to use echo and awk.
echo $PATH | awk 'BEGIN {FS=":"} {for (i=0; i<=NF; i++) print $i}'
EDIT
This command is better than my former idea.
echo "$PATH" | awk 'BEGIN {FS=":"; OFS="\n"} {$1=$1; print $0}'
If you can guarantee that PATH does not contain embedded spaces, you can:
for dir in ${PATH//:/ }; do
echo $dir
done
If there are embedded spaces, this will fail badly.
# preserve the existing internal field separator
OLD_IFS=${IFS}
# define the internal field separator to be a colon
IFS=":"
# do what you need to do with $PATH
for DIRECTORY in ${PATH}
do
echo ${DIRECTORY}
done
# restore the original internal field separator
IFS=${OLD_IFS}
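A variant not in the original answer, sketched here: run the loop in a subshell so the IFS change cannot leak out and nothing needs to be restored afterwards.
# the parentheses start a subshell, so the modified IFS is discarded on exit
(
    IFS=:
    for DIRECTORY in $PATH
    do
        echo "${DIRECTORY}"
    done
)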

bash uses only first entry from find

I'm trying to list all PDF files under a given directory $1 (and its subdirectories), get the number of pages in each file and calculate two numbers using the page count. My script used to work, but only on filenames that don't contain spaces, and only in a single directory filled exclusively with PDF files. I've modified it a bit already (using quotes around variables and such), but now I'm a bit stuck.
The problem I'm having is that, as it is now, the script only processes the first file found by find . -name '*.pdf'. How would I go about processing the rest?
#!/bin/bash
wd=`pwd`
pppl=0.03 #euro
pppnl=0.033 #eruo
cd $1
for entry in "`find . -name '*.pdf'`"
do
filename="$(basename "$entry")"
pagecount=`pdfinfo "$filename" | grep Pages | sed 's/[^0-9]*//'`
pricel=`echo "$pagecount * $pppl" | bc`
pricenl=`echo "$pagecount * $pppnl" | bc`
echo -e "$filename\t\t$pagecount\t$pricel\t$pricenl"
done
cd "$wd"
The problem with using find in a for loop is that if you don't quote the command substitution, filenames with spaces will be split, and if you do quote it, the entire result will be parsed in a single iteration.
The workaround is to use a while loop instead, like this:
find . -name '*.pdf' -print0 | while IFS= read -r -d '' entry
do
....
done
Read this article for more discussion: http://mywiki.wooledge.org/ParsingLs
It's a bad idea to use word splitting. Use a while loop instead.
while read -r entry
do
filename=$(basename "$entry")
pagecount=$(pdfinfo "$filename" | grep Pages | sed 's/[^0-9]*//')
pricel=$(echo "$pagecount * $pppl" | bc)
pricenl=$(echo "$pagecount * $pppnl" | bc)
echo -e "$filename\t\t$pagecount\t$pricel\t$pricenl"
done < <(exec find . -name '*.pdf')
Also, prefer $() over backticks when possible. You also don't need to place quotes ("") around variables or command substitutions when they are used in an assignment.
filename=$(basename "$entry")
could equally well be written as
filename=${entry##*/}
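For example (with a made-up path), the expansion removes everything up to and including the last slash:
entry='./sub dir/report.pdf'   # hypothetical example path
filename=${entry##*/}
echo "$filename"               # prints: report.pdf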

How can I get the output of a command into a bash variable?

I can't remember how to capture the result of an execution into a variable in a bash script.
Basically I have a folder full of backup files of the following format:
backup--my.hostname.com--1309565.tar.gz
I want to loop over a list of all files and pull the numeric part out of the filename and do something with it, so I'm doing this so far:
HOSTNAME=`hostname`
DIR="/backups/"
SUFFIX=".tar.gz"
PREFIX="backup--$HOSTNAME--"
TESTNUMBER=9999999999
#move into the backup dir
cd $DIR
#get a list of all backup files in there
FILES=$PREFIX*$SUFFIX
#Loop over the list
for F in $FILES
do
#rip the number from the filename
NUMBER=$F | sed s/$PREFIX//g | sed s/$SUFFIX//g
#compare the number with another number
if [ $NUMBER -lg $TESTNUMBER ]
#do something
fi
done
I know the "$F | sed s/$PREFIX//g | sed s/$SUFFIX//g" part rips the number correctly (though I appreciate there might be a better way of doing this), but I just can't remember how to get that result into NUMBER so I can reuse it in the if statement below.
Use the $(...) syntax (or backticks).
NUMBER=$( echo $F | sed s/$PREFIX//g | sed s/$SUFFIX//g )
or
NUMBER=` echo $F | sed s/$PREFIX//g | sed s/$SUFFIX//g `
(I prefer the first one, since it is easier to see when multiple ones nest.)
Backticks if you want to be portable to older shells (sh):
NUMBER=`echo $F | sed s/$PREFIX//g | sed s/$SUFFIX//g`
Otherwise, use NUMBER=$(echo $F | sed s/$PREFIX//g | sed s/$SUFFIX//g). It's better and supports nesting more readily.
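As an extra sketch, not from either answer: the prefix and suffix can also be stripped with parameter expansion, which avoids sed entirely. Note that the numeric comparison operator is -lt or -gt; the -lg in the question is not a valid test operator.
# strip the known prefix and suffix with parameter expansion
NUMBER=${F#"$PREFIX"}
NUMBER=${NUMBER%"$SUFFIX"}
# numeric comparison uses -lt / -gt, not -lg
if [ "$NUMBER" -lt "$TESTNUMBER" ]; then
    echo "$NUMBER is smaller than $TESTNUMBER"
fi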
