Add prefix of root folder to all files in directory/subdirectories - file-rename

So I have a bunch of directories that are kind of like this (but with more files):
1-0
    file1
    file2
    folder
        file 3
1-1
    file1
    file2
    folder
        file 3
etc.
And, as the title says, I want to prefix the root folder name (1-0, 1-1, ...) onto all the files in that folder and its subdirectories, like so:
1-0
    1-0file1
    1-0file2
    folder
        1-0file 3
1-1
    1-1file1
    1-1file2
    folder
        1-1file 3
etc.
I've tried a few cmd/batch solutions from other questions, as well as ReNamer, but I couldn't find anything that quite did what I wanted. I'm pretty inexperienced with this kind of thing, so I could easily have missed something.
ReNamer can do it, but I have to process each directory individually, and I have quite a few directories to rename, so that's impractical.

I find this, in general, a rather terrible idea... I cannot imagine a situation where, in the long term, this would really turn out to have been a good idea. One of the reasons is that it just duplicates structural information (the file's location) in the filename itself. Why?...
On a Linux shell there are certainly various ways to accomplish this. Here is a one-liner:
find . -type f | xargs -n 1 -I {} sh -c 'file_full=$1; file_bare=${file_full#./}; file=${file_bare##*/}; prefix=${file_bare%%/*}; dir=${file_bare%/*}; echo mv "$file_full" "./${dir}/${prefix}${file}"' sh {}
Note that there is an echo in front of the mv. Remove the echo only once you are happy with the proposed renames and actually want them executed.
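If the one-liner is hard to read, the same logic can be written as a plain loop. This is just a sketch of the same approach, assuming bash, GNU find, and filenames without embedded newlines; it keeps the same echo safety net, and like the one-liner it assumes every file sits inside one of the top-level folders:
find . -type f | while IFS= read -r file_full; do
    file_bare=${file_full#./}        # path without the leading ./
    prefix=${file_bare%%/*}          # top-level folder, e.g. 1-0
    dir=${file_bare%/*}              # directory part of the path
    file=${file_bare##*/}            # bare file name
    echo mv "$file_full" "./$dir/$prefix$file"
done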

Related

How to copy unique files to each directory using shell scripting

I have a lot of files in a source directory with different names, each ending with a number as a suffix. I need to copy all the files into 3 different directories; while copying, each numbered group should end up in its own directory.
Example :
Source directory
1) Test01.csv
2) Test02.csv
3) Test03.csv
4) Nontesting01.csv
5) Nontesting02.csv
6) Nontesting03.csv
Destination directories
Directory 1 : Test01.csv, Nontesting01.csv
Directory 2 : Test02.csv, Nontesting02.csv
Directory 3 : Test03.csv, Nontesting03.csv
I have tried the code below, but it copies only one file per directory.
#!/bin/bash
dest=/Users/myprofile/Testing_
count=0
for f in *.csv ; do (cp $f/*.csv* "${dest}"${count}/$f ) ;
((count++))
done
Could someone help me figure out how to achieve this with shell scripting?
Piggy-backing on what Jetchisel said, the code should be adapted, as a loop on patterns, to look like this:
#!/bin/bash
destPref="/Users/myprofile/Testing_"
for pattern in 01 02 03
do
    files=(*${pattern}.csv)
    cp -v -- "${files[@]}" "${destPref}${pattern}/"
done
As general guidance, if you start by writing the code logic in pseudo-code, fine-grained for each task you are trying to accomplish, in the correct sequence and in the correct context, then wording it so that it says exactly what you want done will, almost explicitly, tell you WHAT you need to code for each step, though not the HOW. The HOW is the nitty-gritty of coding.
If you give that a try, the solution will almost pop out of the page at you.
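For example, a pseudo-code-first version of the loop above might look like this. It is only a sketch; the mkdir -p line is my assumption, in case the destination directories do not exist yet:
#!/bin/bash
# Pseudo-code:
#   for each suffix (01, 02, 03):
#       make sure the destination directory for that suffix exists
#       copy every *suffix.csv file from the source into it
destPref="/Users/myprofile/Testing_"
for pattern in 01 02 03
do
    mkdir -p "${destPref}${pattern}"      # assumption: create the target if missing
    cp -v -- *"${pattern}".csv "${destPref}${pattern}/"
done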
Good luck with your apprenticeship!

Bash script to find file older than X days, then subsequently delete it, and any files with the same base name?

I am trying to figure out a way to search a directory for a file older than 365 days. If it finds a match, I'd like it to both delete the file and locate any other files in the directory that have the same basename, and delete those as well.
File name examples: search for 12345.pdf and delete it, then also delete 12345_a.pdf and 12345_xyz.pdf if they exist.
Thanks! I am very new to BASH scripting, so patience is appreciated ;-))
I doubt this can be done cleanly in a single pass.
Your best bet is to use -mtime or a variant to collect names and then use another find command to delete files matching those names.
UPDATE
With respect to your comment, I mean something like:
# find basenames of old files
find .... -printf '%f\n' | sort -u > oldfiles
for file in $(<oldfiles); do find . -name "$file" -exec rm {} + ; done
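If it helps, here is a slightly fuller sketch of the same two-pass idea. It assumes GNU find, that the files live in a single directory (the path below is a placeholder), and that "same base name" means everything before the .pdf extension:
#!/bin/bash
dir="/path/to/archive"    # placeholder: the directory to clean up
# Pass 1: base names (without .pdf) of files older than 365 days
# Pass 2: delete each such file plus any 12345_*.pdf style siblings
find "$dir" -maxdepth 1 -type f -name '*.pdf' -mtime +365 -printf '%f\n' |
    sed 's/\.pdf$//' | sort -u |
    while IFS= read -r base; do
        find "$dir" -maxdepth 1 -type f \( -name "${base}.pdf" -o -name "${base}_*.pdf" \) -delete
    done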

Bash find in current folder and #name# sub-folder recursively

Tricky question for a bash noob like me, but I'm sure this is easier than it seems.
I'm currently using the find command as follows :
run "find #{current_release}/migration/ -name '*.sql'| sort -n | xargs cat >#{current_release}/#{stamp}.sql"
in my capistrano recipe.
The problem is that #{current_release}/migration/ contains subfolders, and I'd like the find command to include only one of them, depending on its name (which I know; it's based on the target environment).
As a recap, folder structure is
Folder
|- sub1
|- sub2
and I'm trying to tell find to recurse ONLY into sub1, for example. I'm sure this is possible, I just couldn't find how.
Thanks.
Simply specify the directory you want as argument to find, e.g. find #{current_release}/migration/sub1 ....
EDIT: As per your clarification, you should use the -maxdepth argument for find, to limit the recursion depth. So, for example, you can use find firstdir firstdir/sub1 -maxdepth 1.
You just need to append that to your find invocation:
find #{current_release}/migration/sub_you_want -name ...
Depending on how you make the determination of the sub-directory you want, you should be able to script that as well.
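So, assuming the environment name is available to the recipe in some variable, say #{target_env} (a hypothetical name), the original line would simply become something like:
# assuming a hypothetical #{target_env} variable holds the sub-folder name
run "find #{current_release}/migration/#{target_env} -name '*.sql' | sort -n | xargs cat > #{current_release}/#{stamp}.sql"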

batch rename files and folders at once

I got help regarding the following question:
batch rename files with ids intact
It's a great example of how to rename specific files in a group, but I am wondering if there is a similar script I could use to do the following:
I have a group of nested folders and files within a root directory that contain [myprefix_foldername] and [myprefix_filename.ext]
I would like to rename all of the folders and files to [foldername] and [filename.ext]
Can I use a similar methodology to what is found in the post above?
Thanks!
jml
Yes, quite easily, with find.
find rootDir -name "myprefix_*"
This will give you a list of all files and folders in rootDir that start with myprefix_. From there, it's a short jump to a batch rename:
find rootDir -name "myprefix_*" | while read f
do
    echo "Moving $f to ${f/myprefix_/}"
    mv "$f" "${f/myprefix_/}"
done
EDIT: IFS added per http://www.cyberciti.biz/tips/handling-filenames-with-spaces-in-bash.html
EDIT 2: IFS removed in favor of while read.
EDIT 3: As bos points out, you may need to change while read f to while read -d $'\n' f if your version of Bash still doesn't like it.
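One caveat worth noting: ${f/myprefix_/} replaces the first occurrence of the prefix anywhere in the path, and renaming a folder before its contents invalidates paths find has already printed, so nested prefixed folders can trip this up. A sketch that processes entries depth-first and only rewrites the last path component (assuming bash and GNU find):
find rootDir -depth -name "myprefix_*" | while IFS= read -r f
do
    dir=${f%/*}       # parent directory
    base=${f##*/}     # last path component
    mv -v -- "$f" "$dir/${base#myprefix_}"
done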

Script to copy files on CD and not on hard disk to a new directory

I need to copy files from a set of CDs that have a lot of duplicate content, with each other, and with what's already on my hard disk. The file names of identical files are not the same, and are in sub-directories of different names. I want to copy non-duplicate files from the CD into a new directory on the hard disk. I don't care about the sub-directories - I will sort it out later - I just want the unique files.
I can't find software to do that - see my post at SuperUser https://superuser.com/questions/129944/software-to-copy-non-duplicate-files-from-cd-dvd
Someone at SuperUser suggested I write a script using GNU's "find" and the Win32 version of some checksum tools. I glanced at that, and have not done anything like that before. I'm hoping something exists that I can modify.
I found a good program to delete duplicates, Duplicate Cleaner (it compares checksums), but it won't help me here, as I'd have to copy all the CDs to disk, and each is probably about 80% duplicates, and I don't have room to do that - I'd have to cycle through a few at a time copying everything, then turning around and deleting 80% of it, working the hard drive a lot.
Thanks for any help.
I don't use Windows, but I'll give a suggestion: a combination of GNU find and a Lua script. For find you can try
find / -exec md5sum '{}' ';'
If your GNU software includes xargs the following will be equivalent but may be significantly faster:
find / -print0 | xargs -0 md5sum
This will give you a list of checksums and corresponding filenames. We'll throw away the filenames and keep the checksums:
#!/usr/bin/env lua
local checksums = {}
for l in io.lines() do
    local checksum, pathname = l:match('^(%S+)%s+(.*)$')
    checksums[checksum] = true
end
local cdfiles = assert(io.popen('find e:/ -print0 | xargs -0 md5sum'))
for l in cdfiles:lines() do
    local checksum, pathname = l:match('^(%S+)%s+(.*)$')
    if not checksums[checksum] then
        io.stderr:write('copying file ', pathname, '\n')
        os.execute('cp ' .. pathname .. ' c:/files/from/cd')
        checksums[checksum] = true
    end
end
You can then pipe the output from
find / -print0 | xargs -0 md5sum
into this script.
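Putting it together, and assuming the Lua script above is saved under a name of your choosing, say copy_unique.lua (hypothetical), the whole run would look something like:
# copy_unique.lua is a hypothetical file name for the Lua script above;
# -type f keeps directories out of md5sum's input
find / -type f -print0 | xargs -0 md5sum | lua copy_unique.lua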
There are a few problems:
If the filename has special characters, it will need to be quoted. I don't know the quoting conventions on Windows.
It would be more efficient to write the checksums to disk rather than re-run find every time. You could try
local csums = assert(io.open('/tmp/checksums', 'w'))
for cs in pairs(checksums) do csums:write(cs, '\n') end
csums:close()
And then read checksums back in from the file using io.lines again.
I hope this is enough to get you started. You can download Lua from http://lua.org, and I recommend the superb book Programming in Lua (check out the previous edition free online).
