recursively rename the files with folder name - shell

I'm trying to rename the files in the folders recursively using
find . -iname "*bw" -exec rename /accepted_hits.bw .bw '{}' \;
I want to rename the file in each folder to the respective folder name,
e.g. test1/accepted_hits.bw to test1.bw

try this command:
find . -iname "accepted_hits.bw" | while read -r file; do dir=$(basename "$(dirname "$file")"); mv "$file" "${dir}.bw"; done
where:
the find command recursively searches for files named "accepted_hits.bw"
the -i in -iname makes the match case-insensitive
the | symbol pipes the output of find into the while loop
while read file loops over each line and executes the commands between do and done
Suppose, for example, that the find command outputs:
./a/accepted_hits.bw
./b/accepted_hits.bw
so the block of commands will execute twice. For the first line, dirname "$file" gives ./a, and dir=$(basename "$(dirname "$file")") is the name of the directory, in this example a.
The mv command then moves the file accepted_hits.bw into the current directory (.) and renames it to ${dir}.bw.
Use the man command to view the manual page of each command.
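If any directory names contain spaces, a slightly more defensive variant (just a sketch, assuming bash and find's -print0) avoids word-splitting problems:
find . -iname "accepted_hits.bw" -print0 | while IFS= read -r -d '' file; do
    dir=$(basename "$(dirname "$file")")   # e.g. ./test1/accepted_hits.bw -> test1
    mv "$file" "${dir}.bw"                 # result: ./test1.bw in the current directory
done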

Related

Append file name with source folder using the FIND command

I need to pull files out of a number of directories that all have the same file name, a.txt. The difference comes from the parent folder, so:
example1\a.txt
example2\a.txt
...
So I am hoping to run a FIND command that will collect each a.txt without overwriting the previous one as it moves from folder to folder.
The output would be:
example1_a.txt
example2_a.txt
From another post, the FIND command I want is the following:
find . -name "a.txt" -execdir echo cp -v {} /path/to/dest/ \;
I want to modify it in some way to append the source folder to the file name, so my guess is that I need to manipulate {} somehow.
Thanks in advance
A one-liner might be possible, but you could use this:
#!/bin/bash
targetprefix="targetdir"
mkdir -p "$targetprefix"   # make sure the destination exists
find . -name "a.txt" -print0 | while IFS= read -r -d '' line
do
path=$(dirname "$line")
newpath=$(echo "${path#./}" | tr '/' '_')
filename=$(basename "$line")
cp -v "$line" "$targetprefix/${newpath}_${filename}"
done
Change the variable "targetprefix" to the destination directory you desire.
The pattern of find with -print0 and a while read loop comes from https://mywiki.wooledge.org/BashFAQ/001
Since the results from find all start with "./", I use "${path#./}" to remove that prefix.
The tr then replaces any remaining "/" with an underscore, which takes care of subdirectories and gives names like example1_a.txt.
WARNING: I did not test every "weird" directory and filename format (such as carriage returns in filenames!) for proper execution.
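As for the one-liner mentioned above, something along these lines should also work (a sketch, assuming bash for the ${d//\//_} substitution; /path/to/dest/ is the placeholder destination from the question):
find . -name "a.txt" -exec bash -c 'f=$1; d=$(dirname "$f"); d=${d#./}; cp -v "$f" "/path/to/dest/${d//\//_}_$(basename "$f")"' _ {} \;
For ./example1/a.txt this copies to /path/to/dest/example1_a.txt, and nested directories are flattened with underscores the same way as in the script.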

Archive old files to different files

In the past I have used the following command to archive old files into one file:
find . -mtime -1 | xargs tar -cvzf archive.tar
Now suppose we have 20 directories. I need to make a script that goes into each directory and archives every file into a separate archive that has the same name as the original file.
So suppose I have the following files in one directory named /Home/basic/:
first_file.txt
second_file.txt
third_file.txt
Now after I am done running the script I need output as follows:
first_file_05112014.tar
second_file_05112014.tar
third_file_05112014.tar
Use:
find . -type f -mtime -1 | xargs -I file tar -cvzf file.tar.gz file
I added .gz to the name to indicate it is gzip-compressed as well.
From man xargs:
-I replace-str
Replace occurrences of replace-str in the initial-arguments with names
read from standard input. Also, unquoted blanks do not terminate input
items; instead the separator is the newline character. Implies -x and -L 1.
The find command will produce a list of filepaths. -L 1 means that each whole line will serve as input to the command.
-I file will assign the filepath to file and then each occurrence of file in the tar command line will be replaced by its value, that is, the filepath.
So, for ex, if find produces a filepath ./somedir/abc.txt, the corresponding tar command will look like:
tar -czvf ./somedir/abc.txt.tar.gz ./somedir/abc.txt
which is what is desired. And this will happen for each filepath.
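The sample output in the question also carries a datestamp (first_file_05112014.tar). A rough sketch of one way to add that on top of the same idea, assuming bash, GNU find/date, and the ddmmyyyy format shown:
stamp=$(date +%d%m%Y)
find . -type f -mtime -1 -print0 | while IFS= read -r -d '' f; do
    base="${f%.*}"                             # strip the extension: first_file.txt -> first_file
    tar -czvf "${base}_${stamp}.tar.gz" "$f"   # e.g. first_file_05112014.tar.gz
done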
What about this shell script?
#!/bin/sh
mkdir /tmp/junk #Easy for me to clean up!
find . -mtime -1 -type f | while read -r p
do
dir=`dirname "$p"`
file=`basename "$p"`
tar cvf "/tmp/junk/${file}.tar" "$p"
done
It uses the basename command to extract the name of the file and the dirname command to extract the name of the directory. I don't actually use the directory but I left it in there in case you might find it handy.
I put all the tar files in one place so I could delete them easily, but you could substitute $dir for /tmp/junk if you wanted each archive in the same directory as its original file.
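That variation would change only the tar line, roughly like this:
tar cvf "${dir}/${file}.tar" "$p"   # the archive lands next to the original file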

Bash script copying certain type of file to another location

I was wondering if it is possible to use a BASH script instead of manually copying each file that is in this parent directory:
"/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS7.0.sdk
/System/Library/PrivateFrameworks"
In this folder PrivateFrameworks there are many subfolders, and each subfolder contains the file that I would like to copy out to another location. The structure of the path looks like this:
-PrivateFrameworks
-AccessibilityUI.framework
-AccessibilityUI <- copy this
-AccountSettings.framework
-AccountSettings <- copy this
I do not want to simply copy the entire content of the folder, as there might be cases where the folders contain files which I do not want to copy. So the only way I thought of is to copy by file extension. However, as you can see, the files I want to copy do not have an extension (I think?). I am new to bash scripting, so I am not sure whether this can be done with it.
To copy all files in or below the current directory that do not have extensions, use:
find . ! -name '*.*' -exec cp -t /your/destination/dir/ {} +
The find . command looks for all files in or below the current directory. The argument -name '*.*' would restrict that search to files that have extensions. By preceding it with a not (!), however, we get all files that do not have an extension. Then, -exec cp -t /your/destination/dir/ {} + tells find to copy those files to the destination.
To do the above starting in your directory with the long name, use:
find "/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS7.0.sdk/System/Library/PrivateFrameworks" ! -name '*.*' -exec cp -t /your/destination/dir/ {} +
UPDATE: The unix tag on this question has been removed and replaced with an OSX tag. That means we can't use the -t option on cp. The workaround is:
find . ! -name '*.*' -exec cp {} /your/destination/dir/ \;
This is less efficient because a new cp process is created for every file moved instead of once for all the files that fit on a command line. But, it will accomplish the same thing.
MORE: There are two variations of the -exec clause of a find command. In the first use above, the clause ended with {} + which tells find to fill up the end of command line with as many file names as will fit on the line.
Since OSX lacks cp -t, however, we have to put the file name in the middle of the command. So, we put {} where we want the file name and then, to signal to find where the end of the exec command is, we add a semicolon. There is a trick, though. Because bash would normally consume the semicolon itself rather than pass it on to find, we have to escape the semicolon with a backslash. That way bash gives it to the find command.
Another option is the script below, invoked as:
sh SCRIPT.sh copy-from-directory .extension copy-to-directory
FROM_DIR=$1
EXTENSION=$2
TO_DIR=$3
USAGE="Usage: sh SCRIPT.sh copy-from-directory .extension copy-to-directory
- EXAMPLE: sh SCRIPT.sh PrivateFrameworks .framework .
- NOTE: 'copy-to-directory' argument is optional
"
## print usage if fewer than 2 args
if [[ $# -lt 2 ]]; then echo "${USAGE}" && exit 1 ; fi
## default copy-to-dir to the current directory
if [[ -z "$TO_DIR" ]] ; then TO_DIR=$PWD ; fi
## find directories; derive the target file name by stripping the extension;
## copy the target file to copy-to-dir if it exists
find "$FROM_DIR" -type d | while read -r DIR ; do
FILE_TO_COPY=$(basename "$DIR" "$EXTENSION")
if [[ -f "$DIR/$FILE_TO_COPY" ]] ; then
cp "$DIR/$FILE_TO_COPY" "$TO_DIR"
fi
done
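For the directory from the question, the invocation would look something like the following (bash is assumed since the script uses [[ ]], and ~/Desktop/frameworks is just a placeholder destination that must already exist):
bash SCRIPT.sh "/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS7.0.sdk/System/Library/PrivateFrameworks" .framework ~/Desktop/frameworks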

command line find first file in a directory

My directory structure is as follows
Directory1\file1.jpg
\file2.jpg
\file3.jpg
Directory2\anotherfile1.jpg
\anotherfile2.jpg
\anotherfile3.jpg
Directory3\yetanotherfile1.jpg
\yetanotherfile2.jpg
\yetanotherfile3.jpg
I'm trying to use the command line in a bash shell on ubuntu to take the first file from each directory and rename it to the directory name and move it up one level so it sits alongside the directory.
In the above example:
file1.jpg would be renamed to Directory1.jpg and placed alongside the folder Directory1
anotherfile1.jpg would be renamed to Directory2.jpg and placed alongside the folder Directory2
yetanotherfile1.jpg would be renamed to Directory3.jpg and placed alongside the folder Directory3
I've tried using:
find . -name "*.jpg"
but it does not list the files in sequential order (I need the first file).
This line:
find . -name "*.jpg" -type f -exec ls "{}" +;
lists the files in the correct order but how do I pick just the first file in each directory and move it up one level?
Any help would be appreciated!
Edit: By "first file" I mean that each jpg is numbered from 0 up to however many files are in that folder, for example file1, file2, ... file34, file35, etc. Another thing to mention is that the naming format is inconsistent, so the numbering might start at 0 or 1a or 1b, etc.
You can go inside each dir and run:
$ mv `ls | head -n 1` ..
If first means whatever the shell glob finds first (lexical, but probably affected by LC_COLLATE), then this should work:
for dir in */; do
for file in "$dir"*.jpg; do
echo mv "$file" "${file%/*}.jpg" # If it does what you want, remove the echo
break 1
done
done
Proof of concept:
$ mkdir dir{1,2,3} && touch dir{1,2,3}/file{1,2,3}.jpg
$ for dir in */; do for file in "$dir"*.jpg; do echo mv "$file" "${file%/*}.jpg"; break 1; done; done
mv dir1/file1.jpg dir1.jpg
mv dir2/file1.jpg dir2.jpg
mv dir3/file1.jpg dir3.jpg
Look for all first-level directories, identify the first file in each, and then move it one level up:
find . -type d \! -name . -prune | while read -r d; do
f=$(ls "$d" | head -1)
mv "$d/$f" .
done
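The question also asks for the moved file to take the directory's name; a sketch of that variation (assuming GNU find's -maxdepth and that "first" means first in ls's sort order):
find . -maxdepth 1 -type d ! -name . | while read -r d; do
    f=$(ls "$d" | head -1)                      # first file by ls's sort order
    [ -n "$f" ] && mv "$d/$f" "./${d#./}.jpg"   # e.g. Directory1/file1.jpg -> Directory1.jpg
done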
Building on the top answer, here is a general use bash function that simply returns the first path that resolves to a file within the given directory:
getFirstFile() {
for dir in "$1"; do
for file in "$dir"*; do
if [ -f "$file" ]; then
echo "$file"
break 1
fi
done
done
}
Usage:
# don't forget the trailing slash
getFirstFile ~/documents/
NOTE: it will silently return nothing if you pass it an invalid path.

How to use find to bundle files

I'm struggling with this task:
Write a script that takes as input a directory (path) name and a
filename base (such as ".", "*.txt", etc). The script shall search the
given directory tree, find all files matching the given filename, and
bundle them into a single file. Executing the given file as a script
should return the original files.
Can anyone help me?
First I tried to do the find part like this:
#!/bin/bash
filebase=$2
path=$1
find "$path" \( -name "$filebase" \)
Then I found this code for bundling, but I don't know how to combine them:
for i in "$@"; do
echo "echo unpacking file $i"
echo "cat > $i <<EOF"
cat "$i"
echo "EOF"
done
Going on tripleee's comment, you can use shar to generate a self-extracting archive.
You can take the output of find and pass it through to shar in order to generate the archive.
#!/bin/bash
path="$1"
filebase="$2"
archive="$3"
find "$path" -type f -name "$filebase" | xargs shar > "$archive"
The -type f option passed to find will restrict the search to files (i.e. excludes directories), which seems to be a required limitation.
If the above script is called archive_script.sh, and is executable, then you can call it as below for example:
./archive_script.sh /etc '*.txt' etc-text.shar
This will create a shar archive of all the .txt files in /etc.
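If shar is not available, the heredoc snippet from the question can be combined with find directly. A rough sketch (it assumes text files whose contents never contain a line consisting only of EOF):
#!/bin/bash
# bundle.sh: write a self-extracting bundle of the matching files to stdout
path=$1
filebase=$2
echo "#!/bin/bash"
find "$path" -type f -name "$filebase" | while read -r i; do
    echo "echo unpacking file $i"
    echo "mkdir -p \"\$(dirname \"$i\")\""   # recreate the directory on extraction
    echo "cat > \"$i\" <<'EOF'"
    cat "$i"
    echo "EOF"
done
For example, ./bundle.sh docs '*.txt' > docs-text.sh (docs being a hypothetical directory) produces a script that recreates the original files when run with bash.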
