Is there a way to prepend file names based on respective folders? - bash

I have a few folders with word document files of my notes. I want to run a simple bash loop to help organise the names.
The current layout is:
data
    folder1
        1.jpg
        2.jpg
    folder2
        1.jpg
    ... etc
I want to rename all the jpg files so that they become folderX_1.jpg.
Can it be done with only one loop?

Try this:
#!/bin/bash
origdir=$(pwd)
rm -fr source
mkdir source
cd source
mkdir folder1
touch folder1/1.jpg
touch folder1/2.jpg
mkdir folder2
touch folder2/1.jpg
cd ..
find source -type f -print0 | while IFS= read -r -d '' onefile
do
    dirname=$(dirname "$onefile" | tr '/' '-')
    filename=$(basename "$onefile")
    cp "$onefile" "$origdir"/"$dirname"-"$filename"
done
The first part just creates a test directory / file structure; remove or modify it as required.
The find ... | while ... loop goes through each file in the directory / sub-directory structure.
For each file, it extracts the directory name (replacing '/' with '-' for sub-directories), extracts the filename and copies the file.
If you want to move instead of copy, change cp to mv.
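If you want exactly the folderX_1.jpg form from the question (renaming in place, underscore separator), a minimal variant of the same loop could look like this - a sketch assuming the files sit directly under data/folderX as in the layout above:
find data -type f -name '*.jpg' -print0 | while IFS= read -r -d '' onefile
do
    parent=$(dirname "$onefile")      # e.g. data/folder1
    prefix=$(basename "$parent")      # e.g. folder1
    mv "$onefile" "$parent/${prefix}_$(basename "$onefile")"
done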

Related

How to write a bash script to copy files from one base to another base location

I'm trying to write a bash script.
I have 2 base directories:
./tmp/serve/
./src/
I want to go through all the directories in ./tmp and copy the *.html files into the same folder path in ./src
i.e
if I have an html file at ./tmp/serve/app/components/help/help.html -->
copy it to ./src/app/components/help/ and do this recursively for all subdirectories in ./tmp/
NOTE: the folder structure should already exist, so I just need to copy the files. If a folder doesn't exist then hopefully the script could create it for me (not what I want), but with GIT I can track these folders to manually handle those loose html files.
I got as far as
echo $(find . -name "*.html")\n
But I'm not sure how to actually extract the file path with pwd and do what I need to; maybe it's not a one-liner and needs to be done with some vars.
something like
for i in `echo $(find /tmp/ -name "*.html")\n
do
cp -r $i /src/app/components/help/
done
Going so far as to create the directories would take some more time for me.
I'll try to do it on my own and see if I come up with something,
but for argument's sake, if you do run pwd and get a response, the pseudo code for that would be:
pwd
get response
if that directory does not exist in src, create that directory
copy all of the original directory's contents into the new folder at /src/$newfolder
(possibly running two for loops, one to check the directory tree, and then one to go through each original directory, copying all the html files)
You can use process substitution to loop over the output from your find command, create the destination directory(ies) and then copy the file(s):
#!/bin/bash
# accept first parameters to script as src_dir and dest values or
# simply use default values if no parameter(s) passed
src_dir=${1:-/tmp/serve}
dest=${2:-src}
while read -r orig_path ; do
    # To replace the first occurrence of a pattern with a given string,
    # use ${parameter/pattern/string}
    dest_path="${orig_path/tmp\/serve/${dest}}"
    # Use dirname to remove the filename from the destination path
    # and create the destination directory.
    dest_dir=$(dirname "${dest_path}")
    mkdir -p "${dest_dir}"
    cp "${orig_path}" "${dest_path}"
done < <(find "${src_dir}" -name '*.html')
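Saved as, say, copy_html.sh (the filename here is just an example), the script can be run from the project root either with explicit directories or relying on the defaults:
./copy_html.sh ./tmp/serve src   # copy ./tmp/serve/**/*.html into ./src/...
./copy_html.sh                   # fall back to the defaults /tmp/serve and src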
This script copies .html files from the src directory to the des directory (creating the subdirectories if they do not exist).
Find the files, then remove the src directory name from each path and copy them into the destination directory.
#!/bin/bash
find src -name '*.html' | while read -r i
do
    file=${i#src/}                      # strip the leading src/ from the path
    mkdir -p "des/$(dirname "$file")"   # recreate the subdirectory under des/
    cp "$i" "des/$file"
done
Not sure if you must use bash constructs or not, but here is a GNU tar solution (if you use GNU tar), which IMHO is the best way to handle this situation because all the metadata for the files (permissions, etc.) are preserved:
find ./tmp/serve -name '*.html' -type f -print0 | tar --null -T - -c | tar -x -v -C ./src --strip-components=3
This finds all the .html files (-type f) in the ./tmp/serve directory and prints them nul-terminated (-print0), then sends these filenames via stdin to tar as nul-terminated literals (--null) for inclusion (-T -), creating (-c) an archive which is then sent to another tar instance. That instance extracts (-x) the archive, printing its contents along the way (optional: -v), changing directory to the destination (-C ./src) before commencing and stripping (--strip-components=3) the ./tmp/serve/ prefix from the files. (You could also cd ./tmp/serve beforehand, use find . instead, and change -C to ../../src.)

Bash to rename files to append folder name

In folders and subfolders, I have a bunch of images named by date. I'm trying to come up with a script to look into a folder/subfolders and rename all jpg files to add the folder name.
Example:
/Desktop/Trip 1/200512 1.jpg
/Desktop/Trip 1/200512 2.jpg
would become:
/Desktop/Trip 1/Trip 1 200512 1.jpg
/Desktop/Trip 1/Trip 1 200512 2.jpg
I tried tweaking this script but I can't figure out how to get it to add the new part. I also don't know how to get it to work on subfolders.
#!/bin/bash
# Ignore case, i.e. process *.JPG and *.jpg
shopt -s nocaseglob
shopt -s nullglob
cd ~/Desktop/t/
# Get last part of directory name
here=$(pwd)
dir=${here/*\//}
i=1
for image in *.JPG
do
echo mv "$image" "${dir}${name}.jpg"
((i++))
done
Using find with the -iname option for a case insensitive match and a small script to loop over the images:
find /Desktop -iname '*.jpg' -exec sh -c '
    for img; do
        parentdir=${img%/*}      # keep the parent dir (remove the last `/` and filename)
        dirname=${parentdir##*/} # keep the parent directory name (remove all parent paths `*/`)
        echo mv -i "$img" "$parentdir/$dirname ${img##*/}"
    done
' sh {} +
This extracts the parent path for each image path (like the dirname command) and the directory name (like basename) and constructs a new output filename with the parent directory name before the image filename.
Remove the echo if the output looks as expected.
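For the example paths in the question, the dry run should print something along these lines, matching the desired result:
mv -i /Desktop/Trip 1/200512 1.jpg /Desktop/Trip 1/Trip 1 200512 1.jpg
mv -i /Desktop/Trip 1/200512 2.jpg /Desktop/Trip 1/Trip 1 200512 2.jpg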
Try the following in your for loop. Note that the double quotes are there so that the script can deal with spaces in the file names.
for image in "$search_dir"*.JPG
do
    echo mv "$image" "${dir} ${image}"
done

Moving files from one directory to another preserving subdirectories

I have directory with content (example)
/dir1/a/b/c/file1
/dir1/a/b/c/file2
/dir1/a/d/file3
/dir1/a/e/file4
/dir1/f/dir3/
/dir1/f/dir4/
...
I have a list of files and directories which can be removed - for example file1, file3 and dir3.
I would like to move them (move, not copy nor tar them - the files are large and I need to do it in a short time) to another directory /dir2 (on the same filesystem), but preserving the subdirectories:
/dir1/a/b/c/file1 -> /dir2/a/b/c/file1
/dir1/a/d/file3 -> /dir2/a/d/file3
/dir1/f/dir3/ -> /dir2/f/dir3/
Is there any better way than, for each file and directory (for directories, skipping the last path component), creating the directory in dir2 (using mkdir -p / install -d) and then moving it into it?
One of the simplest solutions is using rsync, with the list of files in --include-from, and with --remove-source-files. But it copies the files and then removes them - I need to avoid copying, because for large files it takes too much time.
If you are comfortable with rsync, you can use it just to list the files and then process that list with this short shell script:
cd dir1
rsync --files-from list --list-only --no-implied-dirs . / |
while read -r mode size date time path
do
    dest=$dir2/$(dirname "$path")   # $dir2 must be an absolute path
    mkdir -p "$dest"                # or mkdirhier, if your system still has it
    mv "$path" "$dest"
done
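Here list is a plain-text file of paths relative to dir1, one per line; for the example tree above it might contain:
a/b/c/file1
a/d/file3
f/dir3/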
I have tried this code with the example you mentioned above and it worked okay. Please test it before you use it. In the second line, you have to put all file names in a plain text file and provide its path; my file contents are shown further below.
#!/bin/ksh
c_file="Path_to_the_file_containing_list_for_movement"
while IFS= read -r v_line
do
    v_fullfilepath=$(find "$1" -name "$v_line")
    v_dirname=$(dirname "$v_fullfilepath")
    v_target_path=${v_dirname/$1\//$2/}
    mkdir -p "$v_target_path"
    mv "$v_fullfilepath" "$v_target_path"
    #echo "$v_line" " " "$v_fullfilepath" " " "$v_dirname" " " "$v_target_path"
done <"$c_file"
This was my file contents,
file1
file3
dir3
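The script takes the source and destination directories as its two positional arguments ($1 and $2), so for the example tree it would be run roughly like this (the script name is just a placeholder):
./move_listed.sh /dir1 /dir2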

Unix - batch file to unzip folder with specific name

I need a batch script for unix but I don't know it very well.
I have folder A and his subfolder
A\a1\b\c\file.zip
A\a2\b\c\otherFile.zip
A\a3\b\c\thirdFile.zip
Each zip file contains an xml file and a text file.
The script has to do 2 things:
1. unzip all the zip files that are in every folder named 'c' under 'A'; the unzipped files should stay in the same folder the zip was in
2. rename all the unzipped files that have the xml extension
Can someone help me?
Thank you very much
You can do it like this.
#find every folder 'c' and unzip the zip files in it
for folder in $(find ./A -name c -type d); do unzip "$folder"/*.zip -d "$folder"; done
#find all .xml files and change the extension to .edefg
for file in $(find ./A -name '*.xml' -type f); do mv "$file" "${file%.xml}.edefg"; done
You must go into each directory. It would be something like:
find A -type d -name c | while read -r dir; do
    cd "$dir" || continue
    unzip -u *.zip
    call_rename_xml_function
    cd - >/dev/null   # go back so the next relative path from find still resolves
done
EDIT: Added -u flag for when the script is called twice.
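call_rename_xml_function is only a placeholder in the loop above; a minimal sketch of such a function, defined before the loop (the target extension .edefg is just borrowed from the other answer as an example), could be:
call_rename_xml_function() {
    # rename every freshly unzipped .xml file in the current 'c' folder
    for f in *.xml; do
        [ -e "$f" ] || continue      # skip if no .xml files matched
        mv "$f" "${f%.xml}.edefg"
    done
}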

unix command to copy all .h file with modified directory-structure

Suppose I have a folder structure like:
Libraries\
    UIToolkit\
        files\
            toolkit.h
            toolkit.c
            toolkit.resource
    NetworkLayer\
        files\
            network.h
            network-info.txt
    ...
I need a command so that I can input the Libraries folder and specify an Output folder; then in the Output folder I get:
Output\
    UIToolkit\
        toolkit.h
    NetworkLayer\
        network.h
Basically it:
copies all .h files and preserves the folder structure
also moves all the headers to their sub-library's root folder, no matter how deep they are in the sub-library's sub-folders.
I was using rsync but it does not do the 2nd step, so I guess I need some quick and dirty modification?
Thanks a lot!
A bit modified answer based on devnull's:
idir=$1
odir=$2
while read -r f; do
    subdir=${f#"$idir"/}      # path relative to the input dir
    subdir=${subdir%%/*}      # first path component, e.g. UIToolkit
    mkdir -p "$odir/$subdir"
    cp -a "$f" "$odir/$subdir"
done < <(find "$idir" -type f -name "*.h")
Call it with something like:
./thisscript.sh Libraries Output
It should be able to work with absolute or relative directories, IMHO, but it won't handle a .h file sitting directly under Libraries (there must be at least one subdirectory level).
You can say:
cd /path/to/Libraries
while read -r file; do
    odir=$(cut -d'/' -f2 <<< "${file}")
    fname=$(basename "${file}")
    cp "$file" "/path/to/Output/${odir}/${fname}"
done < <(find . -type f -name "*.h")
This would copy all the *.h files to the Output folder as per the desired directory structure.
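Note that this variant assumes the Output sub-folders (Output/UIToolkit, Output/NetworkLayer, ...) already exist; if they might not, a mkdir -p before the cp would cover that (a small addition, not part of the original answer):
mkdir -p "/path/to/Output/${odir}"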
