I have a directory with sub-directories and files with names that start with a string similar to the sub-directories; e.g.
bar/
foo-1/ (dir)
foo-1-001.txt
foo-1-002.txt
foo-1-003.txt
foo-2/ (dir)
foo-2-001.txt
foo-2-002.txt
foo-2-003.txt
foo-3/ (dir)
foo-3-001.txt
foo-3-002.txt
foo-3-003.txt
etc.
All files are currently at the same level. I'd like to move the corresponding .txt files into their similarly-named directories with a script (there are > 9500 in my current situation).
I've written the following, but I'm missing something, as I can't get the files to move.
#!/bin/sh
# directory basename processing for derivatives
# create directory list in a text file
find ./ -type d > directoryList.txt
# setup while loop for moving text files around
FILE="directoryList.txt"
exec 3<&0
exec 0<$FILE
while read line
do
echo "This is a directory:`basename $line`"
filemoves=`find ./ -type f -name '*.txt' \! -name 'directoryList.txt' | sed 's|-[0-9]\{3\}\.txt$||g'`
if [ "`basename $filemoves`" == "$line" ]
then
cp $filemoves $line/
echo "copied $filemoves to $line"
fi
done
exec 0<&3
Things seem to work OK until I get to the if. I'm working across a number of *nix, so I have to be careful what arguments I'm throwing around (RHEL, FreeBSD, and possibly Mac OS X, too).
Assuming your files really match the pattern above (everything before the last dash is the directory name) this should do it:
for thefile in *.txt ; do mv -v "$thefile" "${thefile%-*}" ; done
and if it tells you the command line is too long (expanding *.txt into 4900 files is a lot), try this:
find . -name '*.txt' | while IFS= read -r thefile ; do mv -v "$thefile" "${thefile%-*}" ; done
I'm not a shell script expert, but I'm aware that in many shells (and, according to this page: http://www.vectorsite.net/tsshell.html, this includes sh), string comparison is done with the "=" operator, not "==".
[ "$shvar" = "fox" ] String comparison, true if match.
[code block removed]
Reason 1. Used ls instead of globbing
Reason 2. Used mv $VAR1 $VAR2 style moving without quoting variables
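A minimal sketch of the safer pattern those two points suggest, applied to the foo-N layout from the first question (the foo-*-*.txt glob is an assumption based on the sample names):
# glob instead of parsing ls, and quote every expansion
for f in foo-*-*.txt; do
    dir=${f%-*}                # strip the trailing -NNN.txt, leaving e.g. foo-1
    [ -d "$dir" ] && mv "$f" "$dir"/
done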
I am writing a bash script and want it to tell me if the names of the files in a directory appear in a text file and if not, remove them.
Something like this:
counter=1
numFiles=$(ls -1 TestDir/ | wc -l)
while [ $counter -lt $numFiles ]
do
    if [file in TestDir/ not in fileNames.txt]
    then
        rm file
    fi
    ((counter++))
done
So what I need help with is the if statement, which is still pseudo-code.
You can simplify your script logic a lot:
#!/bin/bash
# iterate over every file in TestDir
for file in TestDir/*
do
    # grep -q is silent and only sets the exit status; -x matches the whole
    # line and -F compares literally rather than as a regex. If the file's
    # base name is not in the text document, we delete the file.
    grep -qxF "${file##*/}" fileNames.txt || rm "$file"
done
It looks like you've got a solution that works, but I thought I'd offer this one as well, as it might still be of help to you or someone else.
find /Path/To/TestDir -type f ! -name '.*' -exec basename {} \; | grep -xvF -f /Path/To/filenames.txt
Breakdown
find: This gets file paths in the specified directory (which would be TestDir) that match the given criteria. In this case, I've specified it return only regular files (-type f) whose names don't start with a period (! -name '.*'). Its -exec action then runs the next command on each result:
basename: Given a file path (which is what find spits out), it will return the base filename only, or, more specifically, everything after the last /.
|: This is a command pipe, that takes the output of the previous command to use as input in the next command.
grep: This is a regular-expression matching utility that, in this case, is given two lists of files: one fed in through the pipe from find—the files of your TestDir directory; and the files listed in filenames.txt. Ordinarily, the filenames in the text file would be used to match against filenames returned by find, and those that match would be given as the output. However, the -v flag inverts the matching process, so that grep returns those filenames that do not match.
What results is a list of files that exist in the directory TestDir but do not appear in the filenames.txt file. These are the files you wish to delete, so you can simply use this line of code inside a command substitution $(...) to supply rm with the files to delete.
The full command chain—after you cd into TestDir—looks like this:
rm $(find . -type f ! -name '.*' -exec basename {} \; | grep -xvF -f filenames.txt)
Part of my Bash script's intended function is to accept a directory name and then iterate through every file.
Here is part of my code:
#! /bin/bash
# sameln --- remove duplicate copies of files in specified directory
D=$1
cd $D #go to directory specified as default input
fileNum=0 #save file numbers
DIR=".*|*"
for f in $DIR #for every file in the directory
do
    files[$fileNum]=$f #save that file into the array
    fileNum=$((fileNum+1)) #increment the fileNum
    echo aFile
done
The echo statement is for testing purposes. I passed as an argument the name of a directory with four regular files, and I expected my output to look like:
aFile
aFile
aFile
aFile
but the echo statement only shows up once.
A single operation
Use find for this, it's perfect for it.
find <dirname> -maxdepth 1 -type f -exec echo "{}" \;
The flags explained: -maxdepth defines how deep in the hierarchy you want to look (dirs in dirs in dirs), -type f selects files, as opposed to -type d for dirs. And -exec allows you to process each found file/dir, which can be accessed through {}. You can alternatively pass it to a bash function to perform more tasks.
This simple bash script takes a dir as argument and lists all its files:
#!/bin/bash
find "$1" -maxdepth 1 -type f -exec echo "{}" \;
Note that the last line is effectively the same as find "$1" -maxdepth 1 -type f -print.
Performing multiple tasks
Using find one can also perform multiple tasks by either piping to xargs or while read, but I prefer to use a function. An example:
#!/bin/bash
function dostuff {
    # echo filename
    echo "filename: $1"
    # remove extension from file
    mv "$1" "${1%.*}"
    # get containing dir of file
    dir="${1%/*}"
    # get filename without containing dirs
    file="${1##*/}"
    # do more stuff like echoing results
    echo "containing dir = $dir and file was called $file"
}; export -f dostuff
# export the function so you can call it in a subshell (important!!!)
find . -maxdepth 1 -type f -exec bash -c 'dostuff "$1"' _ {} \;
Note that the function needs to be exported, as you can see. This is so you can call it in the subshell opened by bash -c. The file name is passed to that subshell as a positional argument ($1) rather than spliced into the command string, which keeps special characters in filenames from being interpreted as shell code, so weird characters like spaces in filenames are safe here, no worries. To test it out, I suggest you comment out the mv command in dostuff, otherwise you will remove all your extensions haha.
Closing note
If you decide to go with the find command, which is a great choice, I advise you to read up on it, because it is a very powerful tool. A simple man find will teach you a lot of useful options. You can, for instance, make find quit once it has found a result, which is a handy, rapid way to check whether dirs contain videos or not, for example. It's truly an amazing tool that can be used on various occasions, and often you'll be done with a one-liner (kinda like awk).
You can directly read the files into the array, then iterate through them:
#!/bin/bash
cd "$1"
files=(*)
for f in "${files[@]}"
do
    echo "$f"
done
If you are iterating only files below a single directory, you are better off using simple filename/path expansion to avoid certain uncommon filename issues. The following will iterate through all files in a given directory passed as the first argument (default ./):
#!/bin/bash
srchdir="${1:-.}"
for i in "$srchdir"/*; do
printf " %s\n" "$i"
done
If you must iterate below an entire subtree that includes numerous branches, then find will likely be your only choice. However, be aware that using find or ls to populate a for loop brings with it the potential for problems with embedded characters such as a \n within a filename, etc. See Why for i in $(find . -type f) is wrong, even though it is sometimes unavoidable.
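If you do end up using find, a NUL-delimited read loop avoids those embedded-character problems. A sketch, assuming bash and a find that supports -print0:
#!/bin/bash
srchdir="${1:-.}"
# -print0 separates names with NUL bytes; read -d '' consumes them,
# so spaces and even newlines inside filenames survive intact
while IFS= read -r -d '' f; do
    printf ' %s\n' "$f"
done < <(find "$srchdir" -type f -print0)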
I was wondering if it is possible to use a Bash script instead of manually copying each file that is in this parent directory:
"/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS7.0.sdk/System/Library/PrivateFrameworks"
So in this folder PrivateFrameworks there are many subfolders, and each subfolder contains a file that I would like to copy out to another location. The structure of the path looks like this:
-PrivateFrameworks
-AccessibilityUI.framework
-AccessibilityUI <- copy this
-AccountSettings.framework
-AccountSettings <- copy this
I do not want to copy the entire content of the folder, as there might be cases where the folders contain files which I do not want to copy. So the only way I thought of is to copy by file extension. However, as you can see, the files I want to copy do not have an extension (I think?). I am new to bash scripting, so I am not familiar with whether this can be done.
To copy all files in or below the current directory that do not have extensions, use:
find . ! -name '*.*' -exec cp -t /your/destination/dir/ {} +
The find . command looks for all files in or below the current directory. The argument -name '*.*' would restrict that search to files that have extensions. By preceding it with a not (!), however, we get all files that do not have an extension. Then, -exec cp -t /your/destination/dir/ {} + tells find to copy those files to the destination.
To do the above starting in your directory with the long name, use:
find "/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS7.0.sdk/System/Library/PrivateFrameworks" ! -name '*.*' -exec cp -t /your/destination/dir/ {} +
UPDATE: The unix tag on this question has been removed and replaced with an OSX tag. That means we can't use the -t option on cp. The workaround is:
find . ! -name '*.*' -exec cp {} /your/destination/dir/ \;
This is less efficient because a new cp process is created for every file copied, instead of one cp for all the files that fit on a command line. But it will accomplish the same thing.
MORE: There are two variations of the -exec clause of a find command. In the first use above, the clause ended with {} +, which tells find to fill up the end of the command line with as many file names as will fit.
Since OSX lacks cp -t, however, we have to put the file name in the middle of the command. So we put {} where we want the file name and then, to signal to find where the end of the exec command is, we add a semicolon. There is a trick, though: because the shell would normally consume the semicolon itself rather than pass it on to find, we have to escape the semicolon with a backslash. That way the shell gives it to the find command.
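Schematically, the two forms behave like this (using echo as a stand-in command so the difference is visible):
# {} + : find packs as many names as fit onto one command line
find . -type f -exec echo batch: {} +
# {} \; : one command per file; {} can sit anywhere in the command
find . -type f -exec echo single: {} \;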
sh SCRIPT.sh copy-from-directory .extension copy-to-directory
FROM_DIR=$1
EXTENSION=$2
TO_DIR=$3
USAGE="""Usage: sh SCRIPT.sh copy-from-directory .extension copy-to-directory
- EXAMPLE: sh SCRIPT.sh PrivateFrameworks .framework .
- NOTE: 'copy-to-directory' argument is optional
"""
## print usage if fewer than 2 args
if [[ $# -lt 2 ]]; then echo "${USAGE}" && exit 1 ; fi
## set copy-to-dir default args
if [[ -z "$TO_DIR" ]] ; then TO_DIR=$PWD ; fi
## DO SOMETHING...
## find directories; find target file;
## copy target file to copy-to-dir if file exist
find "$FROM_DIR" -type d | while IFS= read -r DIR ; do
    FILE_TO_COPY=$(echo "$DIR" | xargs basename | sed "s/$EXTENSION//")
    if [[ -f "$DIR/$FILE_TO_COPY" ]] ; then
        cp "$DIR/$FILE_TO_COPY" "$TO_DIR"
    fi
done
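As an aside, basename can strip a trailing suffix directly, so the echo | xargs | sed pipeline above could be replaced by a single call (a sketch; note the sed version treats $EXTENSION as a regular expression, while basename strips the literal suffix):
FILE_TO_COPY=$(basename "$DIR" "$EXTENSION")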
I have a folder and file structure like
Folder/1/fileNameOne.ext
Folder/2/fileNameTwo.ext
Folder/3/fileNameThree.ext
...
How can I rename the files such that the output becomes
Folder/1_fileNameOne.ext
Folder/2_fileNameTwo.ext
Folder/3_fileNameThree.ext
...
How can this be achieved in the Linux shell?
How many different ways do you want to do it?
If the names contain no spaces or newlines or other problematic characters, and the intermediate directories are always single digits, and if you have the list of the files to be renamed in a file file.list with one name per line, then one of many possible ways to do the renaming is:
sed 's%\(.*\)/\([0-9]\)/\(.*\)%mv \1/\2/\3 \1/\2_\3%' file.list | sh -x
You'd avoid running the command through the shell until you're sure it will do what you want; just look at the generated script until it's right.
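To illustrate, with the sample layout above the generated script would look like this:
mv Folder/1/fileNameOne.ext Folder/1_fileNameOne.ext
mv Folder/2/fileNameTwo.ext Folder/2_fileNameTwo.ext
mv Folder/3/fileNameThree.ext Folder/3_fileNameThree.ext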
There is also a command called rename — unfortunately, there are several implementations, not all equally powerful. If you've got the one based on Perl (using a Perl regex to map the old name to the new name) you'd be able to use:
rename 's%/(\d)/%/${1}_%' $(< file.list)
Use a loop as follows:
while IFS= read -d $'\0' -r line
do
mv "$line" "${line%/*}_${line##*/}"
done < <(find Folder -type f -print0)
This method handles spaces, newlines and other special characters in the file names, and the intermediate directories don't necessarily have to be single digits.
This may work if the name is always the same, ie "file":
for i in {1..3};
do
mv $i/file ${i}_file
done
If you have more dirs in a number range, change {1..3} to {x..y}.
I use ${i}_file instead of $i_file because the shell would treat $i_file as a variable named i_file, while we just want i to be the variable with _file as literal text attached to it.
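A quick demonstration of the difference:
i=1
echo "$i_file"    # prints an empty line: the shell looks up a variable named i_file
echo "${i}_file"  # prints 1_file: the braces delimit the variable name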
This solution from AskUbuntu worked for me.
Here is a bash script that does that:
Note: This script does not work if any of the file names contain spaces.
#! /bin/bash
# Only go through the directories in the current directory.
for dir in $(find ./ -type d)
do
    # Remove the first two characters.
    # Initially, $dir = "./directory_name".
    # After this step, $dir = "directory_name".
    dir="${dir:2}"
    # Skip if $dir is empty. Only happens when $dir = "./" initially.
    if [ ! $dir ]
    then
        continue
    fi
    # Go through all the files in the directory.
    for file in $(ls -d $dir/*)
    do
        # Replace / with _
        # For example, if $file = "dir/filename", then $new_file = "dir_filename"
        # where $dir = dir
        new_file="${file/\//_}"
        # Move the file.
        mv $file $new_file
    done
    # Remove the directory.
    rm -rf $dir
done
Copy-paste the script in a file.
Make it executable using
chmod +x file_name
Move the script to the destination directory. In your case this should be inside Folder/.
Run the script using ./file_name.
I'm looping through certain files (all files starting with MOVIE) in a folder with this bash script code:
for i in MY-FOLDER/MOVIE*
do
which works fine when there are files in the folder. But when there aren't any, the loop still runs once with a "file" it thinks is named MY-FOLDER/MOVIE*.
How can I prevent it from executing the code after do when there are no matching files in the folder?
With the nullglob option.
$ shopt -s nullglob
$ for i in zzz* ; do echo "$i" ; done
$
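Applied to the loop from the question, a sketch:
shopt -s nullglob        # unmatched globs now expand to nothing
for i in MY-FOLDER/MOVIE*; do
    echo "$i"            # the body never runs if nothing matches
done
shopt -u nullglob        # restore the default globbing behavior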
for i in $(find MY-FOLDER -name 'MOVIE*' -type f); do
    echo "$i"
done
The find utility is one of the Swiss Army knives of linux. It starts at the directory you give it and finds all files in all subdirectories, according to the options you give it.
-type f will find only regular files (not directories).
As I wrote it, the command will find files in subdirectories as well; you can prevent that by adding -maxdepth 1, as shown below.
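For example:
find MY-FOLDER -maxdepth 1 -name 'MOVIE*' -type f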
Edit, 8 years later (thanks for the comment, @tadman!)
You can avoid the loop altogether with
find . -type f -exec echo "{}" \;
This tells find to echo the name of each file by substituting its name for {}. The escaped semicolon is necessary to terminate the command that's passed to -exec.
for file in MY-FOLDER/MOVIE*
do
    # Skip if not a file
    test -f "$file" || continue
    # Now you know it's a file.
    ...
done