I've created a script that removes all zero-length files from a directory.
#!/bin/bash
find . -size 0 -type f -exec rm -i '{}' \;
It works well, except that it only works in the directory the script is actually located in and its sub-directories. I want to be able to pass a directory as a command line argument (bash scriptname dirname) when executing the script and have it search only that directory and its sub-directories, not the directory the script is located in. Is there a way to do this?
With $x you can access the x-th command line argument of your bash script. So in your case this should be something like
find "$1" -size 0 -type f -exec rm -i '{}' \;
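Put together, a minimal version of the whole script, invoked as bash scriptname dirname, would look like this:
#!/bin/bash
# delete zero-length regular files under the directory given as the first argument
find "$1" -size 0 -type f -exec rm -i '{}' \;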
In Bash you can pass arguments to your script. These arguments can be accessed using the $ sign. For example:
./hello 123 abc xyz
here $0 is your program name
$1 is 123 and so on
You can pass values and use them in your program as $1 through $9; from the tenth argument onward, use braces: ${10}, ${11}, and so on.
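For example, a hello script like the one invoked above could print what it received:
#!/bin/bash
# show how the positional parameters were filled in
echo "script name:    $0"   # ./hello
echo "first argument: $1"   # 123
echo "argument count: $#"   # 3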
If your version of find accepts multiple paths, you can pass all the positional parameters at once:
find "$@" -size 0 -type f -exec rm -i '{}' \;
Note: you should not use $* for file names! It expands the parameters into a single string, so any file names containing spaces or newlines will break the command. "$@" preserves each parameter as a separate word, so it is safe to use here. If your find doesn't accept multiple paths, you can loop over the parameters instead:
for dir in "$@"; do
  find "$dir" -size 0 -type f -exec rm -i '{}' \;
done
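To see why the quoting matters, here is a small illustration with a hypothetical directory name containing a space:
set -- "my dir" photos      # simulate two positional parameters
printf '<%s>\n' $*          # prints <my> <dir> <photos> -- the name was split
printf '<%s>\n' "$@"        # prints <my dir> <photos>   -- each argument intact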
Related
I have some wav files. For each of those files I would like to create a new text file with the same name (obviously with the wav extension being replaced with txt).
I first tried this:
find . -name *.wav -exec 'touch $(echo '{}" | sed -r 's/[^.]+\$/txt/')" \;
which outputted
< touch $(echo {} | sed -r 's/[^.]+$/txt/') ... ./The_stranglers-Golden_brown.wav > ?
Then find complained after I hit y key with:
find: ‘touch $(echo ./music.wav | sed -r 's/[^.]+$/txt/')’: No such file or directory
I figured out I was using a pipe and actually needed a shell. I then ran:
find . -name *.wav -exec sh -c 'touch $(echo "'{}"\" | sed -r 's/[^.]+\$/txt/')" \;
Which did the job.
Actually, I do not really get what is being done internally, but I guess a shell is spawned for every file, right? I fear this is memory-costly.
Then, what if I need to run this command on a large bunch of files and directories!?
Now, is there a way to do this more efficiently?
Basically, I need to transform each file's name and feed it to the touch command.
Thank you.
This find with shell parameter expansion will do the trick for you. You don't need sed at all.
find . -type f -name "*.wav" -exec sh -c 'x=$1; file="${x##*/}"; woe="${file%.*}"; touch "${woe}.txt"; ' sh {} \;
The idea, piece by piece:
x=$1 holds each path that find returns.
file="${x##*/}" strips the directory part, leaving only the last component of the path (just filename.ext).
woe="${file%.*}" stores that name without its extension, and touch then creates the new file with a .txt extension.
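A quick illustration of those expansions with a hypothetical path:
x=./music/Golden_brown.wav
file="${x##*/}"     # Golden_brown.wav (longest */ prefix removed)
woe="${file%.*}"    # Golden_brown     (shortest .* suffix removed)
echo "${woe}.txt"   # Golden_brown.txt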
EDIT
Parameter expansion frees us from the command substitution $( ) sub-process and from sed.
After looking at the sh man page, I figured out that the command above could be simplified.
Synopsis -c [-aCefnuvxIimqVEbp] [+aCefnuvxIimqVEbp] [-o option_name] [+o option_name] command_string [command_name [argument ...]]
...
-c Read commands from the command_string operand instead of from the standard input. Special parameter 0 will be set from the command_name operand and the positional parameters ($1, $2, etc.) set from the remaining argument operands.
We can directly pass the file path, skipping the shell's name (which is useless inside the script anyway). So {} is passed as the command_name $0 which can be expanded right away.
We end up with a cleaner command.
find . -name '*.wav' -exec sh -c 'touch "${0%.*}.txt"' {} \;
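On the efficiency worry: ending the clause with {} + instead of {} \; makes find pack many file names into each shell invocation, so only a few shells are started rather than one per file. A sketch of that variant:
find . -name '*.wav' -exec sh -c '
  for f; do                    # iterate over every path passed to this shell
    touch "${f%.*}.txt"
  done
' sh {} +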
I'm starting out in shell scripting. I need to make checksums of a lot of files, so I thought I would automate the process using a shell script.
I made two scripts. The first runs a recursive ls with an egrep -v on the path I pass as a parameter; the command is captured in a variable, which flattens the output into a string, and a for loop then cuts that string into lines and passes each line as a parameter to the second script. The second script passes its parameter to the hashdeep command, captures the output in another variable (again flattened into a string), cuts it up using IFS, and finally writes the field of interest to a text file.
The output is:
/home/douglas/Trampo/shell_scripts/2016-10-27-001757.jpg: No such file or directory
----Checksum FILE: 2016-10-27-001757.jpg
----Checksum HASH:
The issue: I pass the directory ~/Pictures as the parameter, but the error output refers to another directory, /home/douglas/Trampo/shell_scripts/ (the script's own directory). The file 2016-10-27-001757.jpg is in the ~/Pictures directory, so why is the script looking in its own directory?
First script:
#/bin/bash
arquivos=$(ls -R $1 | egrep -v '^d')
for linha in $arquivos
do
bash ./task2.sh $linha
done
Second script:
#/bin/bash
checksum=$(hashdeep $1)
concatenado=''
for i in $checksum
do
concatenado+=$i
done
IFS=',' read -ra ADDR <<< "$concatenado"
echo
echo '----Checksum FILE:' $1
echo '----Checksum HASH:' ${ADDR[4]}
echo
echo ${ADDR[4]} >> ~/Trampo/shell_scripts/txt2.txt
I think that's it... sorry about the English grammar errors.
I hope the question is clear.
Thanks in advance!
There are several things wrong in the first script alone.
When running ls in recursive mode with -R, the output is grouped per directory, and each file is listed relative to its parent directory rather than by full pathname.
ls -R does not list in long format, so the filter | egrep -v '^d' (which seems intended to keep files and drop directories) does not do what you expect.
In your specific case, the missing file 2016-10-27-001757.jpg is in a subdirectory, but you lost its location by using ls -R.
Do not parse the output of ls. Use find and you won't have the same issue.
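To illustrate with a hypothetical tree under ~/Pictures:
$ ls -R ~/Pictures
/home/douglas/Pictures:
2016-10-27-001757.jpg
holidays

/home/douglas/Pictures/holidays:
beach.jpg

$ find ~/Pictures -type f
/home/douglas/Pictures/2016-10-27-001757.jpg
/home/douglas/Pictures/holidays/beach.jpg
The loop in the first script sees the bare name 2016-10-27-001757.jpg from the ls -R listing and resolves it against the script's own working directory, hence the misleading error; find hands over full pathnames instead.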
The first script can be replaced by a single line.
Try this:
#!/bin/bash
find $1 -type f -exec ./task2.sh "{}" \;
Or if you prefer using xargs, try this:
#!/bin/bash
find $1 -type f -print0 | xargs -0 -n1 -I{} ./task2.sh "{}"
Note: enclosing {} in quotes ensures that task2.sh receives a complete filename even if it contains spaces.
In task2.sh the parameter $1 should also be quoted "$1".
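Applied to task2.sh, the hashdeep call would then read:
checksum=$(hashdeep "$1")    # the quotes keep a filename with spaces as a single argument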
If task2.sh is executable, you are all set. If not, add bash in the line so it reads as:
find $1 -type f -exec bash ./task2.sh "{}" \;
task2.sh, though not posted in the original question, is not executable; it is missing the execute permission.
Add execute permission to it by running chmod like:
chmod a+x task2.sh
Good luck.
I created two scripts to make checksums of a lot of files. In one of them I use the find command to search for files in a specific directory, but when a file has a space in its name, file one.txt for example, the script treats it as two files: file and one. I know the error is in the find command line, but I don't know what I am doing wrong.
My script:
#!/bin/bash
find $1 -type f -exec bash ./task2.sh "{}" \;
In order to search for a file with spaces in the name you either have to enclose the argument in quotes or escape the spaces. Like:
find "test file.txt" or alternatively find test\ file.txt.
The easiest fix in your particular use case is to enclose $1 in quotes:
find "$1" -type f -exec bash ./task2.sh "{}" \;
I was wondering whether it is possible with a Bash script to avoid manually copying each file that is in this parent directory:
"/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS7.0.sdk/System/Library/PrivateFrameworks"
So in this folder PrivateFrameworks, there are many subfolders and in each subfolder it consists of the file that I would like to copy it out to another location. So the structure of the path looks like this:
-PrivateFrameworks
-AccessibilityUI.framework
-AccessibilityUI <- copy this
-AccountSettings.framework
-AccountSettings <- copy this
I do not want to copy the entire content of the folder, as there might be cases where the folders contain files which I do not want to copy. So the only way I thought of is to copy by file extension. However, as you can see, the files I want to copy do not have an extension (I think?). I am new to bash scripting, so I am not sure whether this can be done with it.
To copy all files in or below the current directory that do not have extensions, use:
find . -type f ! -name '*.*' -exec cp -t /your/destination/dir/ {} +
The find . command looks for all files in or below the current directory, and -type f limits the matches to regular files (otherwise directories without a dot in their name, including . itself, would match as well). The test -name '*.*' would restrict that search to files that have extensions. By preceding it with a not (!), however, we get all files that do not have an extension. Then, -exec cp -t /your/destination/dir/ {} + tells find to copy those files to the destination.
To do the above starting in your directory with the long name, use:
find "/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS7.0.sdk/System/Library/PrivateFrameworks" -type f ! -name '*.*' -exec cp -t /your/destination/dir/ {} +
UPDATE: The unix tag on this question has been removed and replaced with an OSX tag. That means we can't use the -t option of cp. The workaround is:
find . -type f ! -name '*.*' -exec cp {} /your/destination/dir/ \;
This is less efficient because a new cp process is created for every file moved instead of once for all the files that fit on a command line. But, it will accomplish the same thing.
MORE: There are two variations of the -exec clause of a find command. In the first use above, the clause ended with {} + which tells find to fill up the end of command line with as many file names as will fit on the line.
Since OSX lacks cp -t, however, we have to put the file name in the middle of the command. So, we put {} where we want the file name and then, to signal to find where the end of the exec command is, we add a semicolon. There is a trick, though. Because bash would normally consume the semicolon itself rather than pass it on to find, we have to escape the semicolon with a backslash. That way bash gives it to the find command.
sh SCRIPT.sh copy-from-directory .extension copy-to-directory
FROM_DIR=$1
EXTENSION=$2
TO_DIR=$3
USAGE="Usage: sh SCRIPT.sh copy-from-directory .extension copy-to-directory
- EXAMPLE: sh SCRIPT.sh PrivateFrameworks .framework .
- NOTE: 'copy-to-directory' argument is optional
"
## print usage if less than 2 args
if [[ $# -lt 2 ]]; then echo "${USAGE}" && exit 1 ; fi
## set copy-to-dir default args
if [[ -z "$TO_DIR" ]] ; then TO_DIR=$PWD ; fi
## DO SOMETHING...
## find directories; find target file;
## copy target file to copy-to-dir if file exist
find "$FROM_DIR" -type d | while read -r DIR ; do
    FILE_TO_COPY=$(basename "$DIR" "$EXTENSION")
    if [[ -f "$DIR/$FILE_TO_COPY" ]] ; then
        cp "$DIR/$FILE_TO_COPY" "$TO_DIR"
    fi
done
I have read many times that if I want to execute something over all subdirectories I should run something like one of these:
find . -name '*' -exec command arguments {} \;
find . -type f -print0 | xargs -0 command arguments
find . -type f | xargs -I {} command arguments {} arguments
The problem is that this works well with the standard utilities, but not as expected when the command is a user-defined function or a script. How can I fix it?
So what I am looking for is a line of code or a script in which I can replace command for myfunction or myscript.sh and it goes to every single subdirectory from current directory and executes such function or script there, with whatever arguments I supply.
Explaining it another way, I want something that works over all subdirectories as nicely as for file in *; do command_myfunction_or_script.sh arguments "$file"; done works over the current directory.
Instead of -exec, try -execdir.
It may be that in some cases you need to use bash:
foo () { echo "$1"; }
export -f foo
find . -type f -name '*.txt' -exec bash -c 'foo arg arg' \;
The last line could be:
find . -type f -name '*.txt' -exec bash -c 'foo "$@"' _ arg arg \;
Depending on what args might need expanding and when. The underscore represents $0.
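To see how the trailing operands map onto the positional parameters, a quick check:
bash -c 'echo "0=$0 1=$1 2=$2"' _ arg1 arg2
# prints: 0=_ 1=arg1 2=arg2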
You could use -execdir where I have -exec if that's needed.
The examples that you give, such as:
find . -name '*' -exec command arguments {} \;
Don't go to every single subdirectory from the current directory and execute the command there, but rather execute the command from the current directory with the path to each file listed by find as an argument.
If what you want is to actually change directory and execute a script, you could try something like this:
STDIR=$PWD; IFS=$'\n'; for dir in $(find . -type d); do cd "$dir"; /path/to/command; cd "$STDIR"; done; unset IFS
Here the current directory is saved to STDIR and the bash Internal Field Separator is set to a newline so names won't split on spaces. Then for each directory (-type d) that find returns, we cd to that directory, execute the command (using the full path here as changing directories will break a relative path) and then cd back to the starting directory.
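An equivalent that avoids juggling IFS, assuming the same placeholder /path/to/command, is to let find spawn a small shell that changes into each directory:
find . -type d -exec sh -c 'cd "$1" && /path/to/command' _ {} \;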
There may be some way to use find with a function, but it won't be terribly elegant. If you have bash 4, what you probably want to do is use globstar:
shopt -s globstar
for file in **/*; do
myfunction "$file"
done
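Note that **/* matches directories as well as files; if myfunction should only see regular files, a small guard on the same loop helps:
shopt -s globstar
for file in **/*; do
  [[ -f "$file" ]] && myfunction "$file"   # skip directories and other non-regular entries
done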
If you're looking for compatibility with POSIX or older versions of bash, you will be forced to source the file defining your function when you invoke bash. So something like this:
find <args> -exec bash -c '. funcfile;
for file; do
myfunction "$file"
done' _ {} +
But that's just ugly. When I get to this point, I usually just put my function in a script on my PATH and live with it.
If you want to use a bash function, this is one way.
work ()
{
local file="$1"
local dir=$(dirname "$file")
pushd "$dir"
echo "in directory $(pwd) working with file $(basename $file)"
popd
}
find . -name '*' | while IFS= read -r line;
do
work "$line"
done