Suppress output to StdOut when piping echo - bash

I'm making a bash script that crawls through a directory and writes all files of a certain type into a text file. I've got that working; it just also writes a bunch of output I don't want (the names of the files) to the console.
Here's the relevant code so far, where tmpFile is the file I'm writing to:
for DIR in `find . -type d` # Find problem directories
do
    for FILE in `ls "$DIR"` # Loop through problems in directory
    do
        if [[ `echo ${FILE} | grep -e prob[0-9]*_` ]]; then
            `echo ${FILE} >> ${tmpFile}`
        fi
    done
done
The files I'm putting into the text file are in the format described by the regex prob[0-9]*_ (something like prob12345_01)
Where I pipe the output from echo ${FILE} into grep, it still outputs to stdout, something I want to avoid. I think it's a simple fix, but it's escaping me.

All this can be done in one single find command. Consider this:
find . -type f -name "prob[0-9]*_*" -exec echo {} >> ${tmpFile} \;
EDIT:
Even simpler: (Thanks to @GlennJackman)
find . -type f -name "prob[0-9]*_*" >> $tmpFile

To answer your specific question, you can pass -q to grep for silent output.
if echo "hello" | grep -q el; then
    echo "found"
fi
But since you're already using find, this can be done with just one command:
find . -regex ".*prob[0-9]*_.*" -printf '%f\n' >> ${tmpFile}
find's regex is matched against the whole path, which is why the leading and trailing .* are needed.
The -printf '%f\n' prints the file name without directory, to match what your script is doing.

What you want to do is read the output of the find command; for every entry find returned, you want to get all (*) the files under that location, then check whether each filename matches the pattern you want; if it matches, append it to the tmpfile:
while read -r dir; do
    for file in "$dir"/*; do # will not match hidden files, unless dotglob is set
        if [[ "$file" =~ prob[0-9]*_ ]]; then
            echo "$file" >> "$tmpfile"
        fi
    done
done < <(find . -type d)
However, find can do that alone; anubhava got me there ;) so look at his answer for how that's done.


Issues renaming files using bash script with input from .txt file with find -exec rename command

Update 01/12/2022
With triplee's helpful suggestions, I resolved it to handle both files & directories by adding a comma between d and f; the final code now looks like this:
while read -r old new; do
    echo "replacing ${old} by ${new}" >&2
    find '/path/to/dir' -depth -type d,f -name "$old" -exec rename "s/${old}/${new}/" {} ';'
done <input.txt
Thank you!
Original request:
I am trying to rename a list of files (from $old to $new), all present in $homedir or in subdirectories in $homedir.
In the command line this line works to rename files in the subfolders:
find ${homedir}/ -name ${old} -exec rename "s/${old}/${new}/" */${old} ';'
However, when I want to implement this line in a simple bash script getting the $old and $new filenames from input.txt, it doesn't work anymore...
input.txt looks like this:
name_old name_new
name_old2 name_new2
etc...
the script looks like this:
#!/bin/bash
homedir='/path/to/dir'
cat input.txt | while read old new;
do
    echo 'replacing' ${old} 'by' ${new}
    find ${homedir}/ -name ${old} -exec rename "s/${old}/${new}/" */${old} ';'
done
After running the script, the text line from echo with the $old and $new filenames being replaced is printed for the entire loop, but no files are renamed. No error is printed either. What am I missing? Your help would be greatly appreciated!
I checked whether the $old and $new variables were correctly passed to the find -exec rename command, but because they are printed by echo that doesn't seem to be the issue.
If you add an echo, like -exec echo rename ..., you'll see what actually gets executed. I'd say both that the path to $old is wrong (you're not using the result of find in the -exec clause) and that */$old, being unquoted, might be expanded by the shell before find ever sees it.
You're also leaving most other expansions unquoted, which can lead to all sorts of trouble.
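For instance, a quick illustration with a hypothetical filename, just to show the failure mode:
touch 'my file.txt'
f='my file.txt'
ls $f     # unquoted: word-splits into 'my' and 'file.txt', two bogus arguments
ls "$f"   # quoted: a single argument, as intended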
You could do it in pure Bash (drop echo when output looks good):
shopt -s globstar
for f in **/"$old"; do echo mv "$f" "${f/%"$old"/$new}"; done
Or with rename directly, though this would run into trouble if too many files match (drop -n when output looks good):
rename -n "s/$old\$/$new/" **/"$old"
Or with GNU find, using -execdir to run in the same directory as the matching file (drop echo when output looks good):
find -type f -name "$old" -execdir echo mv "$old" "$new" \;
And finally, a version with find that spawns just a single subshell (drop echo when output looks right):
find -type f -name "$old" -exec bash -c '
    new=$1
    shift
    for f; do
        echo mv "$f" "${f%/*}/$new"   # find output always carries a directory prefix
    done
' bash "$new" {} +
The argument to rename should be the file itself, not */${old}. You also have a number of quoting errors, and a useless cat.
#!/bin/bash
while read -r old new
do
    echo "replacing ${old} by ${new}" >&2
    find /path/to/dir -name "$old" -exec rename "s/${old}/${new}/" {} ';'
done <input.txt
Running find multiple times on the same directory is hugely inefficient, though. Probably a better solution is to find all files in one go, and skip any that aren't on the list.
find /path/to/dir -type f -exec sh -c '
    for f in "$@"; do
        awk -v f="$f" "f==\$1 { print \"s/\" \$1 \"/\" \$2 \"/\" }" "$0" |
        xargs -I _ -r rename _ "$f"
    done' input.txt {} +
(Untested; probably try with echo before you run this live.)
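An alternative single-pass sketch (equally untested; assumes bash 4+ for associative arrays, and that input.txt holds plain basenames): load the old-to-new map once, then walk the tree and rename only the listed names (drop echo when the output looks right):
declare -A map    # old name -> new name
while read -r old new; do map["$old"]=$new; done < input.txt
find /path/to/dir -type f -print0 | while IFS= read -r -d '' f; do
    base=${f##*/}                     # strip the directory part
    [[ ${map[$base]+set} ]] && echo mv -- "$f" "${f%/*}/${map[$base]}"
done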

How to use bash string formatting to reverse date format?

I have a lot of files that are named as: MM-DD-YYYY.pdf. I want to rename them as YYYY-MM-DD.pdf. I’m sure there is some bash magic to do this. What is it?
For files in the current directory:
for name in ./??-??-????.pdf; do
    if [[ "$name" =~ (.*)/([0-9]{2})-([0-9]{2})-([0-9]{4})\.pdf ]]; then
        echo mv "$name" "${BASH_REMATCH[1]}/${BASH_REMATCH[4]}-${BASH_REMATCH[2]}-${BASH_REMATCH[3]}.pdf"
    fi
done
Recursively, in or under the current directory:
find . -type f -name '??-??-????.pdf' -exec bash -c '
    for name do
        if [[ "$name" =~ (.*)/([0-9]{2})-([0-9]{2})-([0-9]{4})\.pdf ]]; then
            echo mv "$name" "${BASH_REMATCH[1]}/${BASH_REMATCH[4]}-${BASH_REMATCH[2]}-${BASH_REMATCH[3]}.pdf"
        fi
    done' bash {} +
Enabling the globstar shell option in bash lets us do the following (will also, like the above solution, handle all files in or below the current directory):
shopt -s globstar
for name in **/??-??-????.pdf; do
    if [[ "$name" =~ (.*)/([0-9]{2})-([0-9]{2})-([0-9]{4})\.pdf ]]; then
        echo mv "$name" "${BASH_REMATCH[1]}/${BASH_REMATCH[4]}-${BASH_REMATCH[2]}-${BASH_REMATCH[3]}.pdf"
    fi
done
All three of these solutions use a regular expression to pick out the relevant parts of the filenames, and then rearrange these parts into the new name. The only difference between them is how the list of pathnames is generated.
The code prefixes mv with echo for safety. To actually rename files, remove the echo (but run at least once with echo to see that it does what you want).
A direct approach example from the command line:
$ ls
10-01-2018.pdf 11-01-2018.pdf 12-01-2018.pdf
$ ls [0-9]*-[0-9]*-[0-9]*.pdf|sed -r 'p;s/([0-9]{2})-([0-9]{2})-([0-9]{4})/\3-\1-\2/'|xargs -n2 mv
$ ls
2018-10-01.pdf 2018-11-01.pdf 2018-12-01.pdf
The ls output is piped to sed, where the p flag prints the argument without modifications (in other words, the original name of the file) and s performs and outputs the conversion.
The ls + sed result is a combined output consisting of alternating old_file_name and new_file_name lines.
Finally we pipe the resulting feed through xargs to perform the actual rename of the files.
From xargs man:
-n number Execute command using as many standard input arguments as possible, up to number arguments maximum.
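To see how -n2 pairs up the alternating lines coming out of sed, here is a quick illustration with hypothetical names:
$ printf '%s\n' old1 new1 old2 new2 | xargs -n2 echo mv
mv old1 new1
mv old2 new2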
You can use the following command, very close to klashxx's:
for f in *.pdf; do echo "$f"; mv "$f" "$(echo "$f" | sed 's#\(..\)-\(..\)-\(....\)#\3-\1-\2#')"; done
before:
ls *.pdf
12-01-1998.pdf 12-03-2018.pdf
after:
ls *.pdf
1998-12-01.pdf 2018-12-03.pdf
Also, if you have other pdf files in your folder that do not respect this format, you can select only the files that match MM-DD-YYYY.pdf. To do so, use the following command:
for f in `find . -maxdepth 1 -type f -regextype sed -regex './[0-9]\{2\}-[0-9]\{2\}-[0-9]\{4\}.pdf' | xargs -n1 basename`; do echo "$f"; mv "$f" "$(echo "$f" | sed 's#\(..\)-\(..\)-\(....\)#\3-\1-\2#')"; done
Explanations:
find . -maxdepth 1 -type f -regextype sed -regex './[0-9]\{2\}-[0-9]\{2\}-[0-9]\{4\}.pdf': this find command looks only for files in the current working directory that respect your syntax, and xargs -n1 basename extracts their basenames (removing the leading ./; directories and other file types with the same name are not taken into account, and other *.pdf files are ignored).
For each file, a move is performed; the resulting file name is computed using sed with back references to the 3 groups for MM, DD and YYYY.
For these simple filenames, using a more verbose pattern, you can simplify the body of the loop a bit:
twodigit=[[:digit:]][[:digit:]]
fourdigit="$twodigit$twodigit"
for f in $twodigit-$twodigit-$fourdigit.pdf; do
    IFS=- read month day year <<< "${f%.pdf}"
    mv "$f" "$year-$month-$day.pdf"
done
This is basically @Kusalananda's answer, but without the verbosity of regular-expression matching.

how many files find found?

I'm writing a script where I want to error out if the file I'm searching for exists in multiple locations, and tell the user the locations (the find results). So I've got a find like:
file_location=$(find $dir -name $file -print)
I'm thinking it should be simple to see if the file is found in multiple places, but I must not be matching what find uses to separate results with (seems like space sometimes, and a newline others). As such, rather than matching on that, I want to see if there are any characters after $file in $file_location.
I'm checking with
if echo "$file_location" | grep -q "${file}."; then
and this still doesn't work. So I guess I don't care what I use, except I want to capture $file_location as a result of the find, and then check that. Can you suggest a good way?
Something like the following, if you want to avoid problems with newlines in filenames and such:
files=()
while IFS= read -d $'\0' -r match; do
    files+=("$match")
done < <(find "$dir" -name "$file" -print0)
((${#files[@]} > 1)) && printf '%s\n' "${files[@]}"
Or in bash 4+:
shopt -s globstar dotglob
files=("$dir"/**/"$file")
((${#files[@]} > 1)) && printf '%s\n' "${files[@]}"
found=$(find "$dir" -name "$file" -ls)
count=$(wc -l <<< "$found")
if [ "$count" -gt 1 ]
then
    echo "I found more than one:"
    echo "$found"
fi
For zero matches you will still get a count of 1, because of the non-obvious way the shell strips a trailing newline with the $() operator: effectively, one line of output and zero lines of output both end up as one line. See xxd <<< "" for a demonstration of the automatic appending of a newline when a string is used as input again. A simple way to circumvent this is to add a fake newline at the beginning of the string, so an empty string cannot occur: found=$(echo; find …), and then subtract one from the number of lines.
EDIT: I changed the usage of -printf "%p\n" in my answer to -ls which performs a proper quoting of newlines. Otherwise file names with newlines in them would mess up the counting.
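Putting that workaround together, a sketch of the corrected counting:
found=$(echo; find "$dir" -name "$file" -ls)   # leading fake newline
count=$(( $(wc -l <<< "$found") - 1 ))         # subtract the fake line again
if [ "$count" -gt 1 ]
then
    echo "I found more than one:"
    printf '%s\n' "${found#$'\n'}"             # strip the fake newline for display
fi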
If you specify the full name in the find command, the matches on name will be unique. That is, if you say find -name "hello.txt", just files named hello.txt will be found.
What you can do is something like
find "$dir" -name "$file" -printf '.'
This will print one . for each match found. Then, to see how many files with this name were found, it is just a matter of counting the dots in the output.
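For instance, a sketch of the counting step (one byte is printed per match, so wc -c does the counting; -printf is GNU find):
count=$(find "$dir" -name "$file" -printf '.' | wc -c)
echo "$count file(s) found"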
No need for find here if you're running a new (4.0+) bash which can do recursive globbing itself; just load glob results directly into a shell array, and check its length:
shopt -s nullglob globstar # enable recursive globbing, and null results
file_locations=( "$dir"/**/"$file" )
echo "${#file_locations[@]} files named $file found under $dir; they are:"
printf ' %q\n' "${file_locations[@]}"
If you don't want to mess with nullglob, then:
shopt -s globstar # enable recursive globbing
file_locations=( "$dir"/**/"$file" )
# without nullglob, a failed match will return the glob expression itself
# to test for this, see if our first entry exists
if [[ ! -e ${file_locations[0]} ]]; then
    echo "No instances of $file found under $dir"
else
    echo "${#file_locations[@]} files named $file found under $dir; they are:"
    printf ' %q\n' "${file_locations[@]}"
fi
You can still use an array to unambiguously read find results on old versions of bash; unlike more naive approaches, this will work even when file or directory names contain literal newlines:
file_locations=( )
while IFS= read -r -d '' filename; do
    file_locations+=( "$filename" )
done < <(find "$dir" -type f -name "$file" -print0)
echo "${#file_locations[@]} files named $file found under $dir; they are:"
printf ' %q\n' "${file_locations[@]}"
I recommend using:
find . -name blong.txt -print0
which tells find to separate its output with null (\0) characters. That makes it easier to use awk with the -F flag or xargs with the -0 flag.
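For example, a sketch that counts NUL-separated matches (tr -dc '\0' keeps only the separator bytes, so wc -c counts one per match):
find . -name blong.txt -print0 | tr -dc '\0' | wc -c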
Try:
N=0
for i in `find $dir -name $file -printf '. '`
do
    N=$((N+1))
done
echo $N

Shell Programming File Search and Append

I am trying to write a shell program that will search my current directory (say, my folder containing C code), read all files looking for the keywords "printf" or "fprintf", and append the include statement to each such file if it isn't already there.
I have tried to write the search portion already (for now, all it does is search files and print the list of matching files), but it is not working. Included below is my code. What am I doing wrong?
EDIT: New code.
#!/bin/sh
#processes files ending in .c and appends statements if necessary
#search for files that meet criteria
for file in $( find . -type f )
do
    echo $file
    if grep -q printf "$file"
    then
        echo "File $file contains command"
    fi
done
To execute commands in a subshell you need $( command ). Notice the $ before the parentheses.
You don't need to store the list of files in a temporary variable, you can directly use
for file in $( find . ) ; do
    echo "$file"
done
And with
find . -type f | grep somestring
you are not searching the file content but the file name (in my example, all the files whose name contains "somestring").
To grep the content of the files:
for file in $( find . -type f ) ; do
    if grep -q printf "$file" ; then
        echo "File $file contains printf"
    fi
done
Note that if you grep for printf it will also match fprintf (as fprintf contains printf).
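If that distinction matters, grep's -w flag matches whole words only, so a file containing only fprintf would no longer match:
if grep -qw printf "$file" ; then
    echo "File $file contains printf as a whole word"
fi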
If you want to search just files ending with .c you can use the -name option
find . -name "*.c" -type f
Use the -type f option to list only files.
In any case check if your grep has the -r option to search recursively
grep -r --include "*.c" printf .
You can do this sort of thing with sed -i, but I find that distasteful. Instead, it seems reasonable to use ed (sed is ed for streams, so it makes sense to use ed when you're not working with a stream).
#!/bin/sh
for i in *.c; do
    grep -Fq '#include <stdio.h>' "$i" && continue
    grep -Fq printf "$i" && ed -s "$i" << EOF > /dev/null
1
i
#include <stdio.h>
.
w
EOF
done
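For comparison, the sed -i variant dismissed above would look something like this (a sketch, assuming GNU sed, whose 1i command inserts before the first line):
for i in *.c; do
    grep -Fq '#include <stdio.h>' "$i" && continue
    grep -Fq printf "$i" && sed -i '1i #include <stdio.h>' "$i"
done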

Bash rename extension recursive

I know there are a lot of things like this around, but either they don't work recursively or they are huge.
This is what I got:
find . -name "*.so" -exec mv {} `echo {} | sed s/.so/.dylib/` \;
When I just run the find part it gives me a list of files. When I run the sed part it replaces any .so with .dylib. When I run them together they don't work.
I replaced mv with echo to see what happened:
./AI/Interfaces/C/0.1/libAIInterface.so ./AI/Interfaces/C/0.1/libAIInterface.so
Nothing is replaced at all!
What is wrong?
This will do everything correctly:
find -L . -type f -name "*.so" -print0 | while IFS= read -r -d '' FNAME; do
    mv -- "$FNAME" "${FNAME%.so}.dylib"
done
By correctly, we mean:
1) It will rename just the file extension (due to use of ${FNAME%.so}.dylib). All the other solutions using ${X/.so/.dylib} are incorrect as they wrongly rename the first occurrence of .so in the filename (e.g. x.so.so is renamed to x.dylib.so, or worse, ./libraries/libTemp.so-1.9.3/libTemp.so is renamed to ./libraries/libTemp.dylib-1.9.3/libTemp.so - an error).
2) It will handle spaces and any other special characters in filenames (except double quotes).
3) It will not change directories or other special files.
4) It will follow symbolic links into subdirectories and, for links to files, rename the target file rather than the link itself (by default, without -L, find processes the symbolic link itself, not the file it points to).
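A quick demonstration of point 1, reusing the tricky path from above:
f='./libraries/libTemp.so-1.9.3/libTemp.so'
echo "${f/.so/.dylib}"   # first occurrence: ./libraries/libTemp.dylib-1.9.3/libTemp.so (wrong)
echo "${f%.so}.dylib"    # suffix only: ./libraries/libTemp.so-1.9.3/libTemp.dylib (right)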
for X in `find . -name "*.so"`
do
    mv $X ${X/.so/.dylib}
done
A bash script to rename file extensions generally
#!/bin/bash
find -L . -type f -name "*.$1" -print0 | while IFS= read -r -d '' file; do
    echo "renaming $file to $(basename "${file%.$1}.$2")"
    mv -- "$file" "${file%.$1}.$2"
done
Credits to aps2012.
Usage
Create a file, e.g. called ext-rename (no extension, so you can run it like a command), in e.g. /usr/bin (make sure /usr/bin is added to your $PATH).
Run ext-rename [ext1] [ext2] anywhere in the terminal, where [ext1] is the extension to rename from and [ext2] is the extension to rename to. An example use would be ext-rename so dylib, which will rename any file with extension .so to the same name but with extension .dylib.
What is wrong is that
echo {} | sed s/.so/.dylib/
is only executed once, before find is launched: sed is given the literal {} on its input, which doesn't match .so and is left unchanged, so your resulting command line is
find . -name "*.so" -exec mv {} {} \;
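One way to get the substitution evaluated once per file is to run a small shell inside -exec, as other answers here do; a minimal sketch (drop echo when the output looks right):
find . -name '*.so' -exec sh -c 'echo mv -- "$1" "${1%.so}.dylib"' sh {} \;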
if you have Bash 4
#!/bin/bash
shopt -s globstar
shopt -s nullglob
for file in /path/**/*.so
do
    echo mv "$file" "${file/%.so}.dylib"
done
He needs recursion:
#!/bin/bash
function walk_tree {
    local directory="$1"
    local i
    for i in "$directory"/*; do
        if [ "$i" = . -o "$i" = .. ]; then
            continue
        elif [ -d "$i" ]; then
            walk_tree "$i"
        elif [ "${i##*.}" = "so" ]; then
            echo mv "$i" "${i%.*}.dylib"
        else
            continue
        fi
    done
}
walk_tree "."
