script read file contents and copy files - bash

I wrote a bash script that should read the contents of a text file, look for the file named on each line, and copy it to another folder. It's not copying all the files, only two of them: the third and the last.
#!/bin/bash
filelist=~/Desktop/file.txt
sourcedir=~/ownCloud2
destdir=~/Desktop/file_out
while read line; do
    find $sourcedir -name $line -exec cp '{}' $destdir/$line \;
    echo find $sourcedir -name $line
    sleep 1
done < "$filelist"
If I use this command on the command line, it finds and copies the file:
find ~/ownCloud2 -name 123456AA.pdf -exec cp '{}' ~/Desktop/file_out/123456AA.pdf \;
If I use the script instead, it doesn't work.

I used your exact script and had no problems, with both bash and sh, so maybe you are using another shell in your shebang line.
Use find only when you need to find the file "somewhere" in multiple directories under the search start point.
If you know the exact directory in which the file is located, there is no need to use find. Just use the simple copy command.
Also, if you use "cp -v ..." instead of the "echo", you will see what each command is actually doing, which might help you spot what is wrong.
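For reference, here is a sketch of the same script with the variables quoted and cp -v swapped in for the echo, as suggested above; a defensive version like this also survives file names containing spaces:
#!/bin/bash
filelist=~/Desktop/file.txt
sourcedir=~/ownCloud2
destdir=~/Desktop/file_out
while IFS= read -r line; do
    # -r keeps backslashes literal; IFS= preserves leading/trailing whitespace
    find "$sourcedir" -name "$line" -exec cp -v '{}' "$destdir/$line" \;
done < "$filelist"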

Related

Shell script affects only one file instead of all of them

I am using a shell script to remove the XML tags of a set of files in a folder. This is what my file looks like:
#!/bin/sh
find texts -type f -name '*.xml' -exec sh -c '
    mkdir -p modified
    file="$0"
    sed "s/<[^>]*>//g" "$file" > modified/modified_texts
' {} ';'
This is supposed to take all the files in the "texts" folder (via $file), remove their XML tags, and place the tag-free output under the folder "modified".
The problem is that, instead of taking all the files, it is using just one, filling the file "modified_texts" with the content of a single file (without XML tags; that part works).
I don't really understand what I'm doing wrong, so I would appreciate any help.
Instead of doing the output redirection (with truncation!) for every sed command, move it to the outer scope, so the output file is opened (and its prior contents truncated) only once, before find is started at all. As written, every sed run re-truncates modified/modified_texts, so only the last file processed survives in it.
#!/bin/sh
mkdir -p modified # this only needs to happen once, so move it outside
find texts -type f -name '*.xml' -exec sed 's/<[^>]*>//g' {} ';' > modified/modified_texts
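If you wanted one stripped output file per input instead of a single concatenated file, a sketch along these lines should work (the output naming, modified/<basename>, is an assumption on my part):
#!/bin/sh
mkdir -p modified
find texts -type f -name '*.xml' -exec sh -c '
    for file do
        # one output file per input, named after the input
        sed "s/<[^>]*>//g" "$file" > "modified/$(basename "$file")"
    done
' sh {} +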

Find not searching for variable within a loop in bash

I'm trying to match filenames listed in a .csv (all in one column) against a directory in a project, to determine which files exist in the project. I'm going to output the paths of these files to another csv, but I can't get find to return anything when using this loop. The filenames often contain underscores, hyphens, and periods, but no other special characters or whitespace.
#!/bin/bash
while read file; do
    echo $file
    find "/Users/myname/Documents/project/software 6.2.0.1/webclient" -name "$file" -exec echo '{}' \;
done < test.csv
The echo $file runs fine, but find won't search for it correctly or run my -exec commands. It will, however, run correctly if I substitute -name "$file" with -name "*.ext". The find command also runs fine if I just run it in a terminal while the variable is set.
Edits:
Example .csv; the script outputs the same names to the console:
Organization_createLocation.uim
Organization_listLocationPopup.properties
Organization_createLocation.properties
Organization_modifyLocationView.properties
Organization_listLocationPopup.uim
Organization_modifyLocationView.vim
After running bash -x ./script.sh my console reads:
+ read file
+ echo $'Organization_createLocation.uim\r'
Organization_createLocation.uim
+ find '/Users/myname/Documents/project/software 6.2.0.1/webclient' -name $'Organization_createLocation.uim\r' -exec echo '{}' ';'
...and so on for every item in the .csv.
There were carriage returns at the end of each line. bash -x ./myscript.sh helped me find them, and running dos2unix on the .csv solved my issue. It is running fine now.
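If running dos2unix on the .csv is not an option, you can also strip the carriage return inside the loop; a minimal sketch:
#!/bin/bash
while IFS= read -r file; do
    file=${file%$'\r'}   # remove a trailing carriage return, if present
    find "/Users/myname/Documents/project/software 6.2.0.1/webclient" -name "$file" -exec echo '{}' \;
done < test.csv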

Bash script copying certain type of file to another location

I was wondering whether a bash script could do this, instead of my manually copying each file out of this parent directory:
"/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS7.0.sdk/System/Library/PrivateFrameworks"
Inside this PrivateFrameworks folder there are many subfolders, and each subfolder contains a file that I would like to copy out to another location. The structure of the path looks like this:
-PrivateFrameworks
    -AccessibilityUI.framework
        -AccessibilityUI <- copy this
    -AccountSettings.framework
        -AccountSettings <- copy this
I do not want to copy the entire contents of the folder, as there might be cases where the subfolders contain files which I do not want to copy. So the only way I can think of is to copy by file extension. However, as you can see, the files I want to copy do not have an extension (I think?). I am new to bash scripting, so I am not sure whether this can be done with it.
To copy all files in or below the current directory that do not have extensions, use:
find . ! -name '*.*' -exec cp -t /your/destination/dir/ {} +
The find . command looks for all files in or below the current directory. The argument -name '*.*' would restrict that search to files that have extensions. By preceding it with a not (!), however, we get all files that do not have an extension. Then, -exec cp -t /your/destination/dir/ {} + tells find to copy those files to the destination.
To do the above starting in your directory with the long name, use:
find "/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS7.0.sdk/System/Library/PrivateFrameworks" ! -name '*.*' -exec cp -t /your/destination/dir/ {} +
UPDATE: The unix tag on this question has been removed and replaced with an OSX tag. That means we can't use the -t option on cp (it is a GNU extension that OSX's BSD cp lacks). The workaround is:
find . ! -name '*.*' -exec cp {} /your/destination/dir/ \;
This is less efficient because a new cp process is created for every file moved instead of once for all the files that fit on a command line. But, it will accomplish the same thing.
MORE: There are two variations of the -exec clause of a find command. In the first use above, the clause ended with {} + which tells find to fill up the end of command line with as many file names as will fit on the line.
Since OSX lacks cp -t, however, we have to put the file name in the middle of the command. So, we put {} where we want the file name and then, to signal to find where the end of the exec command is, we add a semicolon. There is a trick, though. Because bash would normally consume the semicolon itself rather than pass it on to find, we have to escape the semicolon with a backslash. That way bash gives it to the find command.
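Side by side, the two forms of the same copy look like this; only the number of cp processes differs:
# GNU cp: one invocation handles a whole batch of file names
find . ! -name '*.*' -exec cp -t /your/destination/dir/ {} +
# BSD/OSX cp: one invocation per file, name substituted in the middle
find . ! -name '*.*' -exec cp {} /your/destination/dir/ \;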
sh SCRIPT.sh copy-from-directory .extension copy-to-directory
FROM_DIR=$1
EXTENSION=$2
TO_DIR=$3
USAGE="""Usage: sh SCRIPT.sh copy-from-directory .extension copy-to-directory
- EXAMPLE: sh SCRIPT.sh PrivateFrameworks .framework .
- NOTE: 'copy-to-directory' argument is optional
"""
## print usage if fewer than 2 args
if [[ $# -lt 2 ]]; then echo "${USAGE}" && exit 1 ; fi
## set copy-to-dir default args
if [[ -z "$TO_DIR" ]] ; then TO_DIR=$PWD ; fi
## DO SOMETHING...
## find directories; find target file;
## copy target file to copy-to-dir if file exist
find "$FROM_DIR" -type d | while read -r DIR ; do
    ## strip the extension from the directory name to get the file name
    FILE_TO_COPY=$(basename "$DIR" "$EXTENSION")
    if [[ -f "$DIR/$FILE_TO_COPY" ]] ; then
        cp "$DIR/$FILE_TO_COPY" "$TO_DIR"
    fi
done

How to automate dos2unix using shell script?

I have a bunch of xml files in a directory that need to have the dos2unix command performed on them, and new files will be added every so often. Instead of manually performing the dos2unix command on each file every time, I would like to automate it all with a script. I have never even looked at a shell script in my life, but so far I have this from what I have read in a few tutorials:
FILES=/tmp/testFiles/*
for f in $FILES
do
    fname=`basename $f`
    dos2unix *.xml $f $fname
done
However, I keep getting the 'usage' output. I think the problem is that I am not assigning the name of the new file correctly (fname).
The reason you're getting a usage message is that dos2unix doesn't take the extra arguments you're supplying. It will, however, accept multiple filenames (also via globs). You don't need a loop unless you're processing more files than can be accepted on the command line.
dos2unix /tmp/testFiles/*.xml
Should be all you need, unless you need recursion:
find /tmp/testFiles -name '*.xml' -exec dos2unix {} +
(the {} + form is POSIX, so this works with GNU find among others)
If all files are in one directory (no recursion needed) then you're almost there.
for file in /tmp/testFiles/*.xml ; do
dos2unix "$file"
done
By default dos2unix should convert in place and overwrite the original.
If recursion is needed you'll have to use find as well:
find /tmp/testFiles -name '*.xml' -print0 | while IFS= read -r -d '' file ; do
dos2unix "$file"
done
Which will work on all files ending with .xml in /tmp/testFiles/ and all of its sub-directories.
If no other steps are required you can skip the shell loop entirely:
Non-recursive:
find /tmp/testFiles -maxdepth 1 -name '*.xml' -exec dos2unix {} +
And for recursive:
find /tmp/testFiles -name '*.xml' -exec dos2unix {} +
In your original command I see you finding the base name of each file name and trying to pass that to dos2unix, but your intent is not clear. Later, in a comment, you say you just want to overwrite the files. My solution performs the conversion, creates no backups and overwrites the original with the converted version. I hope this was your intent.
mkdir /tmp/testFiles/converted/
for f in /tmp/testFiles/*.xml
do
    dos2unix "$f" "${f/testFiles\//testFiles\/converted\/}"
    # or for pure sh:
    # dos2unix "$f" "$(echo "$f" | sed 's#testFiles/#testFiles/converted/#')"
done
The result will be saved in the converted/ subdirectory.
The construction ${f/testFiles\//testFiles\/converted\/} (thanks to Rush)
or sed is used here to add converted/ before the name of the file:
$ echo /tmp/testFiles/1.xml | sed s#testFiles/#testFiles/converted/#
/tmp/testFiles/converted/1.xml
It is not clear which implementation of dos2unix you are using. Different implementations require different arguments. There are many different implementations around.
On RedHat/Fedora/Suse Linux you could just type
dos2unix /tmp/testFiles/*.xml
On SunOS you are required to give an input and output file name, and the above command would destroy several of your files.
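On such an implementation you would have to spell out an input and an output name for every file, along these lines (this sketch assumes your dos2unix accepts the same name for both, so it overwrites in place; check your man page before relying on that):
for f in /tmp/testFiles/*.xml ; do
    dos2unix "$f" "$f"
done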

Bash scripting, loop through files in folder fails

I'm looping through certain files (all files starting with MOVIE) in a folder with this bash script code:
for i in MY-FOLDER/MOVIE*
do
This works fine when there are files in the folder. But when there aren't any, the loop still runs once, with $i set to the literal string MY-FOLDER/MOVIE*.
How can I avoid entering the body after
do
when there aren't any matching files in the folder?
With the nullglob option.
$ shopt -s nullglob
$ for i in zzz* ; do echo "$i" ; done
$
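In your script that would look like this; with nullglob set, an unmatched pattern expands to nothing, so the loop body never runs:
#!/bin/bash
shopt -s nullglob
for i in MY-FOLDER/MOVIE*
do
    echo "$i"   # only reached when MOVIE* actually matched something
done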
for i in $(find MY-FOLDER -name 'MOVIE*' -type f); do
echo $i
done
The find utility is one of the Swiss Army knives of Linux. It starts at the directory you give it and finds all files in all subdirectories, according to the options you give it.
-type f will find only regular files (not directories).
As I wrote it, the command will find files in subdirectories as well; you can prevent that by adding -maxdepth 1
Edit, 8 years later (thanks for the comment, @tadman!)
You can avoid the loop altogether with
find . -type f -exec echo "{}" \;
This tells find to echo the name of each file by substituting its name for {}. The escaped semicolon is necessary to terminate the command that's passed to -exec.
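Applied to the question's layout, that might look like this (-maxdepth 1 keeps it out of subdirectories, and -name restores the MOVIE prefix filter):
find MY-FOLDER -maxdepth 1 -type f -name 'MOVIE*' -exec echo "{}" \;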
for file in MY-FOLDER/MOVIE*
do
    # Skip if not a file
    test -f "$file" || continue
    # Now you know it's a file.
    ...
done
