How to use find to bundle files - bash

I'm struggling with this task:
Write a script that takes as input a directory (path) name and a
filename base (such as ".", "*.txt", etc.). The script shall search the
given directory tree, find all files matching the given filename
pattern, and bundle them into a single file. Executing the resulting
file as a script should recreate the original files.
Can anyone help me?
First I tried to do the find part like this:
#!/bin/bash
filebase=$2
path=$1
find "$path" -name "$filebase"
Then I found this code for the bundling part, but I don't know how to combine them.
for i in "$@"; do
echo "echo unpacking file $i"
echo "cat > $i <<EOF"
cat "$i"
echo "EOF"
done

Following tripleee's comment, you can use shar to generate a self-extracting archive.
You can take the output of find and pass it to shar in order to generate the archive.
#!/bin/bash
path="$1"
filebase="$2"
archive="$3"
find "$path" -type f -name "$filebase" | xargs shar > "$archive"
The -type f option passed to find restricts the search to regular files (i.e. it excludes directories), which seems to be what the task requires.
If the above script is called archive_script.sh and is executable, you can call it, for example, as:
./archive_script.sh /etc '*.txt' etc-text.shar
This will create a shar archive of all the .txt files in /etc.
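If you would rather not depend on shar, the two pieces from the question can be combined directly. Here is a minimal sketch, assuming file names contain no whitespace and no bundled file contains a line consisting only of EOF:
#!/bin/bash
# Emit a self-extracting script on stdout.
path="$1"
filebase="$2"

echo "#!/bin/bash"
for f in $(find "$path" -type f -name "$filebase"); do
    echo "echo unpacking file $f"
    echo "mkdir -p $(dirname "$f")"   # recreate the directory structure on unpack
    echo "cat > $f <<'EOF'"           # quoted EOF so the contents are not expanded
    cat "$f"
    echo "EOF"
done
Redirect the output to a file and make it executable; running that file recreates the original files.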

Related

Bash find all zip files within subfolders, bind files and their path into loop and perform some tasks

I need to write a Bash script which will find all zip files within subfolders, write the files and their paths to a list file, and then loop through this list and perform some tasks with all the zip files (e.g. extract, check the files within the zip, and then delete the extracted files).
Some thoughts:
#!/bin/bash
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
echo "$DIR" ## current dir
ls "$DIR"/*.zip
Then bind the result to a file (ziplist.txt for example).
Then read this file in a loop, line by line:
while IFS= read -r line ; do
echo "$line" # do something with each zip
done < "$DIR/ziplist.txt"
How can this be done in the best way? Sorry, I have limited experience with bash.
This should do the trick:
for filename in $(find . -name '*.zip'); do
echo "$filename" # Your operations here
done
If you want to keep using a while loop you can do:
while IFS= read -r file; do
echo "$file" # Your operations here
done < <(find . -name '*.zip')
You can use find in many ways to do this.
"Your way" would be to create a temporary file and loop through this:
find /your/path -name '*.zip' > /tmp/zips
If you don't necessarily need the collection of files but just want to perform a task on each one as you find it, you can use find's -exec:
find /your/path -name '*.zip' -exec /path/to/your/worker/script.sh {} \;
which will execute your script.sh for every zip file it finds, with the full zip file path as the argument.
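Putting both ideas together for the task in the question, here is a minimal sketch; it assumes unzip is installed, the "check" step is just a placeholder, and file names contain no newlines:
#!/bin/bash
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"

# Bind the find results to a file, then loop through it line by line.
find "$DIR" -name '*.zip' > "$DIR/ziplist.txt"

while IFS= read -r zip; do
    workdir=$(mktemp -d)            # temporary extraction directory
    unzip -q "$zip" -d "$workdir"   # extract
    ls -l "$workdir"                # check the files within the zip (placeholder)
    rm -rf "$workdir"               # delete the extracted files
done < "$DIR/ziplist.txt"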
Hope this helps.

Bash script copying certain type of file to another location

I was wondering if a Bash script could do this, instead of manually copying each file in this parent directory:
"/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS7.0.sdk/System/Library/PrivateFrameworks"
In this folder PrivateFrameworks there are many subfolders, and each subfolder contains a file that I would like to copy out to another location. The structure of the path looks like this:
-PrivateFrameworks
-AccessibilityUI.framework
-AccessibilityUI <- copy this
-AccountSettings.framework
-AccountSettings <- copy this
I do not want the option of copying the entire contents of the folder, as there might be cases where the folders contain files which I do not want to copy. So the only way I can think of is to copy by file extension. However, as you can see, the files I want to copy do not have an extension (I think?). I am new to bash scripting, so I am not sure whether this can be done.
To copy all files in or below the current directory that do not have extensions, use:
find . ! -name '*.*' -exec cp -t /your/destination/dir/ {} +
The find . command looks for all files in or below the current directory. The argument -name '*.*' would restrict that search to files that have extensions. By preceding it with a not (!), however, we get all files that do not have an extension. Then, -exec cp -t /your/destination/dir/ {} + tells find to copy those files to the destination.
To do the above starting in your directory with the long name, use:
find "/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS7.0.sdk/System/Library/PrivateFrameworks" ! -name '*.*' -exec cp -t /your/destination/dir/ {} +
UPDATE: The unix tag on this question has been removed and replaced with an OSX tag. That means we can't use the -t option of cp. The workaround is:
find . ! -name '*.*' -exec cp {} /your/destination/dir/ \;
This is less efficient because a new cp process is created for every file moved instead of once for all the files that fit on a command line. But, it will accomplish the same thing.
MORE: There are two variations of the -exec clause of a find command. In the first use above, the clause ended with {} +, which tells find to fill up the end of the command line with as many file names as will fit.
Since OSX lacks cp -t, however, we have to put the file name in the middle of the command. So, we put {} where we want the file name and then, to signal to find where the end of the exec command is, we add a semicolon. There is a trick, though. Because bash would normally consume the semicolon itself rather than pass it on to find, we have to escape the semicolon with a backslash. That way bash gives it to the find command.
Alternatively, here is a small script that takes the source directory, the extension, and (optionally) the destination; call it as:
sh SCRIPT.sh copy-from-directory .extension copy-to-directory
FROM_DIR=$1
EXTENSION=$2
TO_DIR=$3
USAGE="""Usage: sh SCRIPT.sh copy-from-directory .extension copy-to-directory
- EXAMPLE: sh SCRIPT.sh PrivateFrameworks .framework .
- NOTE: 'copy-to-directory' argument is optional
"""
## print usage if fewer than 2 args
if [[ $# -lt 2 ]]; then echo "${USAGE}" && exit 1 ; fi
## set copy-to-dir default args
if [[ -z "$TO_DIR" ]] ; then TO_DIR=$PWD ; fi
## DO SOMETHING...
## find directories; find the target file;
## copy the target file to copy-to-dir if it exists
find "$FROM_DIR" -type d | while read -r DIR ; do
FILE_TO_COPY=$(basename "$DIR" "$EXTENSION")
if [[ -f "$DIR/$FILE_TO_COPY" ]] ; then
cp "$DIR/$FILE_TO_COPY" "$TO_DIR"
fi
done
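For the directory from the question, a call might look like this (the destination ~/frameworks-copy is made up for illustration):
sh SCRIPT.sh "/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS7.0.sdk/System/Library/PrivateFrameworks" .framework ~/frameworks-copy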

parsing and changing the files in all sub directories

I would like to go through all the *.c files in the subdirectories, prefix a string to each file name, and keep the file in the same subdirectory.
For example, if there's a file dir1/subdir1/test.c, I would like to rename it to xyztest.c and leave it in dir1/subdir1/. How can I do that?
I would like to do this in a bash script.
Thanks,
What you need is:
Find all c files in a directory (use find command)
Separate the filename and dirname (use basename and dirname)
Move dirname/filename to dirname/prefix_filename
That should do it.
A find command with while loop should do that:
PREFIX=xyz;
while read line
do
path="$(dirname $line)"
base="$(basename $line)";
mv "${line}" "$path/${PREFIX}${base}"
done < <(find dir1 -name "*.c")
find dir -name '*.c' -printf 'mv "%p" "%h/xyz%f"\n' | sh
This will fail if you have file names with double quotes, or various other shell metacharacters; but if you don't, it's a nice one-liner.
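If you do have such file names, a metacharacter-safe variant hands the paths to a small inline script instead of generating shell text:
find dir -name '*.c' -exec sh -c 'for f; do mv "$f" "$(dirname "$f")/xyz$(basename "$f")"; done' sh {} +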

Bash scripting, loop through files in folder fails

I'm looping through certain files (all files starting with MOVIE) in a folder with this bash script code:
for i in MY-FOLDER/MOVIE*
do
which works fine when there are files in the folder. But when there aren't any, the loop still runs once, with $i set to the literal string MY-FOLDER/MOVIE*.
How can I keep the body after do from running when there are no matching files in the folder?
With the nullglob option.
$ shopt -s nullglob
$ for i in zzz* ; do echo "$i" ; done
$
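In a script, the same idea looks like this (the echo stands in for the real loop body):
#!/bin/bash
shopt -s nullglob            # unmatched globs expand to nothing, not to themselves
for i in MY-FOLDER/MOVIE*
do
    echo "processing $i"     # never runs when there are no MOVIE* files
done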
for i in $(find MY-FOLDER -name 'MOVIE*' -type f); do
echo "$i"
done
The find utility is one of the Swiss Army knives of Linux. It starts at the directory you give it and finds all files in all subdirectories, according to the options you give it.
-type f will find only regular files (not directories).
As I wrote it, the command will find files in subdirectories as well; you can prevent that by adding -maxdepth 1
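For example, to look only in the top level of MY-FOLDER:
find MY-FOLDER -maxdepth 1 -name 'MOVIE*' -type f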
Edit, 8 years later (thanks for the comment, @tadman!)
You can avoid the loop altogether with
find . -type f -exec echo "{}" \;
This tells find to echo the name of each file by substituting its name for {}. The escaped semicolon is necessary to terminate the command that's passed to -exec.
for file in MY-FOLDER/MOVIE*
do
# Skip if not a file
test -f "$file" || continue
# Now you know it's a file.
...
done

bash: processing (recursively) through all files in a directory

I want to write a bash script that (recursively) processes all files of a certain type.
I know I can get the matching file list by using find thusly:
find . -name "*.ext"
I want to use this in a script:
recursively obtain a list of files with a given extension
obtain the full file pathname
pass the full pathname to another script
check the return code from the script; if non-zero, log the name of the file that could not be processed
My first attempt looks (pseudocode) like this:
ROOT_DIR = ~/work/projects
cd $ROOT_DIR
for f in `find . -name "*.ext"`
do
#need to lop off the leading './' from the filename, but I haven't worked
#out how to use cut yet
newname = `echo $f | cut -c 3-`
filename = "$ROOT_DIR/$newname"
retcode = ./some_other_script $filename
if $retcode ne 0
logError("Failed to process file: $filename")
done
This is my first attempt at writing a bash script, so the snippet above is not likely to run. Hopefully, though, the logic of what I'm trying to do is clear enough, and someone can show me how to join the dots and convert the pseudocode above into a working script.
I am running on Ubuntu
find . -name '*.ext' \( -exec ./some_other_script "$PWD"/{} \; -o -print \)
Here -exec doubles as a test: when some_other_script exits non-zero, the -o branch fires and -print writes the name of the file that could not be processed.
Using | while read to iterate over file names is fine as long as there are no file names containing newlines to be processed:
find . -name '*.ext' | while IFS= read -r FILE; do
process "$(readlink -f "$FILE")" || echo "error processing: $FILE"
done
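Pulling this together into the script the question asks for, here is a minimal sketch; readlink -f comes from GNU coreutils (fine on Ubuntu), and the log file name errors.log is made up for illustration:
#!/bin/bash
ROOT_DIR=~/work/projects
cd "$ROOT_DIR" || exit 1

while IFS= read -r f; do
    filename="$(readlink -f "$f")"              # full pathname; also drops the leading ./
    if ! ./some_other_script "$filename"; then  # non-zero return code means failure
        echo "Failed to process file: $filename" >> errors.log
    fi
done < <(find . -name '*.ext')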
