Finding files iteratively in bash

I am somehow failing to cp certain files iteratively to the relevant directories.
I have a directory ORIG and three directories G1, G2 and G3 that should receive data from ORIG.
I have this:
i=1; for((i=1;i<=3;i++)); do;
cp ORIG/f$i'_'* G$i/;done
Why doesn't the star work so that I can get all the files that start with f1 to the directory G1/?

I think it should be:
for((i=1;i<=3;i++)); do
cp ORIG/f$i'_'* G$i/; done
Pay attention to the first line: you should not write a semicolon after do. You can also omit the "i=1", because the for-loop already initializes i.
This will copy files of the form "ORIG/f$i_*" to "G$i".
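For comparison only, here is an equivalent loop written with brace expansion instead of the C-style for - a minimal sketch assuming the same ORIG and G1..G3 layout as in the question:
#!/bin/bash
# Copy every file whose name starts with f1_ into G1/, f2_ into G2/, and f3_ into G3/.
for i in {1..3}; do
    cp ORIG/"f${i}_"* "G${i}/"
done
Quoting only the fixed part of the name (f${i}_) keeps the trailing * unquoted, so the glob still expands.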

Related

How to copy unique files to each directory using shell scripting

I have a lot of files in a source directory with different names, each ending with a number as a suffix. I need to copy all the files into 3 different directories. While copying, each file should end up in only one directory, as in the example below.
Example :
Source directory
1) Test01.csv
2) Test02.csv
3) Test03.csv
4) Nontesting01.csv
5) Nontesting02.csv
6) Nontesting03.csv
Destination directory
Directory 1 : Test01.csv, Nontesting01.csv
Directory 2 : Test02.csv, Nontesting02.csv
Directory 3 : Test03.csv, Nontesting03.csv
I have tried the code below, but it copies only 1 file per directory.
#!/bin/bash
dest=/Users/myprofile/Testing_
count=0
for f in *.csv ; do (cp $f/*.csv* "${dest}"${count}/$f ) ;
((count++))
done
Could someone help me achieve this using shell scripting?
Piggy-backing on what Jetchisel said, the code should be adapted, as a loop over the suffix patterns, to look like this:
#!/bin/bash
destPref="/Users/myprofile/Testing_"
for pattern in 01 02 03
do
    files=(*"${pattern}".csv)                       # e.g. Test01.csv and Nontesting01.csv for pattern 01
    cp -v -- "${files[@]}" "${destPref}${pattern}/"
done
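Note that this assumes the destination directories Testing_01, Testing_02 and Testing_03 already exist. If that is not guaranteed (an assumption on my part, not something the question states), a one-line addition inside the loop, before the cp, takes care of it:
    mkdir -p -- "${destPref}${pattern}"   # create the destination directory if it is missing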
As general guidance, if you start by writing the code logic
in pseudo-code,
fine-grained for each task that you are trying to accomplish,
in the correct sequence and
in the correct context,
then wording it so that it describes exactly what you want to happen will, almost by itself, tell you WHAT you need to code for each step, not the HOW. The HOW is the nitty-gritty of coding.
If you give that a try, the solution will almost pop out of the page at you.
Good luck with your apprenticeship!

Mac OS - Batch Rename All Files in Folder but Disregard All SubFolders

I have a bunch of folders in which I would like to rename all the files contained within, excluding any subfolders.
For example lets say I have two parent folders:
ParentFolder1 - [PF1]
ParentFolder2 - [PF2]
Each parent folder has a varying number of subfolders:
SubParentFolder1_1
SubParentFolder1_2
SubParentFolder2_1
Inside the ParentFolder and each SubParentFolder there can be files such as .mp3, .txt. etc. or more subfolders.
How would I go about renaming all and any files in this manner:
example.mp3 -> example - [PF1]
example.txt -> example - [PF2]
example.docx -> example - [PF2]
Appreciate any input!
This is a way to list files (not folders) in a range of directories and then do something with them... Specifics of renaming are up to you.
for FOLD in Parent*; do
    for FILE in $(ls -p "$FOLD" | grep -v "/"); do
        echo "$FOLD/$FILE" "$FOLD/${FILE%.*}"
    done
done
Explanation:
For each folder (FOLD) in directories matching the wildcard Parent*,
list the contents, adding / to the end of directory names.
Do inverse grep on that list, leaving just the file names.
Loop through each FILE and echo out the original folder+file, followed by the folder and file with the suffix removed by pattern matching.
Once you test this, you can replace echo with mv to do the actual renaming... (I've put these on separate lines to make them more readable, but I would usually run this as one long command.)
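Purely as an illustration of one way to fill in the renaming (my assumption being that the tag to append is the bracketed part of the parent folder's name, e.g. [PF1] from "ParentFolder1 - [PF1]"), the echo could be replaced with something like:
for FOLD in Parent*; do
    TAG="${FOLD##* - }"                               # "ParentFolder1 - [PF1]" -> "[PF1]" (assumed naming scheme)
    for FILE in $(ls -p "$FOLD" | grep -v "/"); do
        mv "$FOLD/$FILE" "$FOLD/${FILE%.*} - $TAG"    # example.mp3 -> "example - [PF1]"
    done
done
This keeps the ls-based listing from the answer above, so file names containing spaces would still be a problem; a find -maxdepth 1 -type f loop would be the more robust variant.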

How to recursively rename all files and folder including specific part of the filename with Windows Bash?

This has to be a duplicate, but I have read and tried at least a dozen Q&As here on SO, and I cannot get any of them working for my case.
Really hope this won't result in downvotes because of it.
So I'm on Windows (10) and have a Bash terminal that I want to use for my task. The MINGW64 one I downloaded when I started working with Git.
I would prefer the solution with this program, but will be perfectly happy with one in Command Prompt Terminal or even PowerShell.
I created a TemplateApp which is in C:\Apps\TemplateApp folder which has multiple folders and subfolders named TemplateApp or TemplateApp.something as well as a lot of files that have TemplateApp as a part of their name.
Could be:
TemplateApp.ext
TemplateApp.something.ext
something.TemplateApp.something.ext
Then I copied the uppermost folder to C:\Apps\TemplateApp - Copy and in turn renamed it to C:\Apps\ProductionApplication.
Now, for the love of whomever, I cannot make any of the scripts I found on SO work for my case, i.e. rename all the above-mentioned files and folders by replacing TemplateApp with ProductionApplication.
Here is a bash function I wrote that I think does very much like what you are wanting to do.
function func_CreateSourceAndDestination() {
    # For each source file, derive its destination path, create the directory if needed, and sync it.
    for (( i = 0 ; i < ${#files_syncSource[@]} ; i++ )) ; do
        files_syncDestination[${i}]="${files_syncSource[${i}]#${directory_MusicLibraryRoot_source}}"
        file_destinationPath="$( dirname -- "${directory_PMPRoot_destination}${files_syncDestination[${i}]}" )"
        if [ ! -d "${file_destinationPath}" ] ; then
            mkdir -p "${file_destinationPath}"
        fi
        rsync -rltDvPmz "${files_syncSource[${i}]}" "${directory_PMPRoot_destination}${files_syncDestination[${i}]}"
    done
}
In my case I'm feeding into rsync for a source and a destination. I'm pulling all the file paths from an array that has been split into path segments. I have to make certain character substitutions for FAT and NTFS file systems. I do this recursively.
files_syncDestination[${i}]="${files_syncDestination[${i}]//\:/__}"
That's the magic. I load a new array with the character substituted. You could do the same with a loaded variable including your phrases for change.
files_syncDestination[${i}]="${files_syncDestination[${i}]//${targetPhrase}/${subPhrase}}"
After that change in the function, you could use rsync or cp or mv as you prefer to go from your source array to your destination array.
(The double-slash in the substitution makes the substitution global.)
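To connect this back to the original question: the same //-substitution can drive a plain rename loop. A minimal sketch (my own, not part of the rsync workflow above), assuming the tree now lives at /c/Apps/ProductionApplication in the Git Bash (MINGW64) view of the filesystem:
#!/bin/bash
# Rename the deepest entries first (-depth) so that directories are renamed only
# after everything inside them has already been handled.
find /c/Apps/ProductionApplication -depth -name '*TemplateApp*' | while IFS= read -r path; do
    dir="$(dirname -- "$path")"
    base="$(basename -- "$path")"
    mv -- "$path" "$dir/${base//TemplateApp/ProductionApplication}"   # global replace in the name only
done
An echo can be put in front of mv first to preview the renames before running them for real.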

How to find duplicate directories

Let create some testing directory tree:
#!/bin/bash
top="./testdir"
[[ -e "$top" ]] && { echo "$top already exists!" >&2; exit 1; }
mkfile() { printf "%s\n" $(basename "$1") > "$1"; }
mkdir -p "$top"/d1/d1{1,2}
mkdir -p "$top"/d2/d1some/d12copy
mkfile "$top/d1/d12/a"
mkfile "$top/d1/d12/b"
mkfile "$top/d2/d1some/d12copy/a"
mkfile "$top/d2/d1some/d12copy/b"
mkfile "$top/d2/x"
mkfile "$top/z"
The structure is: find testdir \( -type d -printf "%p/\n" , -type f -print \)
testdir/
testdir/d1/
testdir/d1/d11/
testdir/d1/d12/
testdir/d1/d12/a
testdir/d1/d12/b
testdir/d2/
testdir/d2/d1some/
testdir/d2/d1some/d12copy/
testdir/d2/d1some/d12copy/a
testdir/d2/d1some/d12copy/b
testdir/d2/x
testdir/z
I need to find the duplicate directories, but I need to consider only files (e.g. I should ignore (sub)directories that contain no files). So, from the above test tree the wanted result is:
duplicate directories:
testdir/d1
testdir/d2/d1some
because both (sub)trees contain only the two identical files a and b (plus several directories without files).
Of course, I could run md5deep -Zr ., or walk the whole tree with a perl script (using File::Find+Digest::MD5, or Path::Tiny, or the like) and calculate each file's md5 digest, but this doesn't help with finding the duplicate directories... :(
Any idea how to do this? Honestly, I don't have any idea.
EDIT
I don't need working code. (I'm able to code myself)
I "just" need some ideas "how to approach" the solution of the problem. :)
Edit2
The rationale behind this - why I need it: I have approx 2.5 TB of data copied from many external HDDs as the result of a bad backup strategy. E.g. over the years, whole $HOME dirs were copied onto (many different) external HDDs. Many sub-directories have the same content, but they're in different paths. So now I'm trying to eliminate the same-content directories.
And I need to do this by directory, because there are directories which have some duplicate files, but not all. Let's say:
/some/path/project1/a
/some/path/project1/b
and
/some/path/project2/a
/some/path/project2/x
e.g. the file a is a duplicate (not only by name, but by content too) - but it is needed in both projects. So I want to keep a in both directories, even though the files are duplicates. Therefore I'm looking for a "logic" for finding duplicate directories.
Some key points:
If I understand right (from your comment, where you said: "(Also, when me saying identical files I mean identical by their content, not by their name)"), you want to find duplicate directories, e.g. directories whose content is exactly the same as that of some other directory, regardless of the file names.
For this you must calculate some checksum or digest for the files. Identical digest = identical file (with great probability). :) As you already said, md5deep -Zr -of /top/dir is a good starting point.
I added the -of because, for such a job, you don't want to hash the contents of symlink targets or other special files like FIFOs - just plain files.
Calculating the md5 for each file in a 2.5 TB tree will surely take a few hours, unless you have a very fast machine. md5deep runs a thread per CPU core. So, while it runs, you can write the scripts.
Also, consider running md5deep as sudo, because it would be frustrating to get error messages about unreadable files after a long run, only because you forgot to change file ownerships... (Just a note.) :) :)
For the "how to":
For comparing "directories" you need to calculate some "directory-digest", for easy comparison and for finding duplicates.
The most important thing is to realize the following key points:
You can exclude directories that contain files with unique digests. If a file is unique, e.g. has no duplicates, then it is pointless to check its directory: a unique file in some directory means that the directory is unique too. So the script should ignore every directory containing files with unique MD5 digests (from md5deep's output).
You don't need to calculate the "directory-digest" from the files themselves (as you tried in your follow-up question). It is enough to calculate the "directory digest" from the already-calculated md5s of the files; you just must ensure that you sort them first!
For example, if your directory /path/to/some contains only two files a and b, and
if file "a" has md5 : 0cc175b9c0f1b6a831c399e269772661
and file "b" has md5: 92eb5ffee6ae2fec3ad71c777531578f
you can calculate the "directory-digest" from the above file-digests, e.g. using the Digest::MD5 you could do:
perl -MDigest::MD5=md5_hex -E 'say md5_hex(sort qw( 92eb5ffee6ae2fec3ad71c777531578f 0cc175b9c0f1b6a831c399e269772661))'
and will get 3bc22fb7aaebe9c8c5d7de312b876bb8 as your "directory-digest". The sort is crucial(!) here, because the same command, but without the sort:
perl -MDigest::MD5=md5_hex -E 'say md5_hex(qw( 92eb5ffee6ae2fec3ad71c777531578f 0cc175b9c0f1b6a831c399e269772661))'
produces: 3a13f2408f269db87ef0110a90e168ae.
Note: even though the above digests aren't digests of your files, they will be unique for every directory with different files and the same for directories with identical files (because identical files have identical md5 file-digests). The sorting ensures that you always calculate the digest in the same order; e.g. if some other directory contains two files
file "aaa" has md5 : 92eb5ffee6ae2fec3ad71c777531578f
file "bbb" has md5 : 0cc175b9c0f1b6a831c399e269772661
using the above sort and md5 you will again get 3bc22fb7aaebe9c8c5d7de312b876bb8 - i.e. that directory contains the same files as the one above...
So, in this way you can calculate a "directory-digest" for every directory you have, and you can be sure that if you see another directory-digest of 3bc22fb7aaebe9c8c5d7de312b876bb8, it means: this directory contains exactly the above two files a and b (even if their names are different).
This method is fast, because you calculate the "directory-digests" only from small 32-byte strings, so you avoid excessive repeated file-digest calculations.
The final part is easy now. Your final data should be in the form:
3a13f2408f269db87ef0110a90e168ae /some/directory
16ea2389b5e62bc66b873e27072b0d20 /another/directory
3a13f2408f269db87ef0110a90e168ae /path/to/other/directory
so from this it is easy to see that /some/directory and /path/to/other/directory are identical, because they have identical "directory-digests".
Hm... All of the above is only a few lines of perl. It would probably have been faster to write the perl script here directly than this long textual answer - but you said you don't want code... :) :)
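Not what was asked for (no code was wanted), but purely to make the above concrete: a minimal bash sketch of the same idea, using md5sum from coreutils instead of md5deep and using the sorted, joined file digests of a whole subtree directly as the grouping key (hashing that string again, as described above, is just a space optimization). Paths are assumed to contain no newlines; this is an illustration, not a tuned solution for 2.5 TB.
#!/bin/bash
declare -A groups
while IFS= read -r dir; do
    # "directory digest": sorted md5s of every file in the subtree, joined with '+'
    key=$(find "$dir" -type f -exec md5sum {} + | awk '{print $1}' | sort | tr '\n' '+')
    [[ -n "$key" ]] || continue                  # ignore (sub)trees that contain no files
    groups[$key]+="$dir"$'\n'
done < <(find testdir -type d)

for key in "${!groups[@]}"; do
    if (( $(grep -c . <<< "${groups[$key]}") > 1 )); then
        echo "duplicate directories:"
        printf '%s\n' "${groups[$key]}"
    fi
done
On the test tree above this should report testdir/d1, testdir/d1/d12, testdir/d2/d1some and testdir/d2/d1some/d12copy as one group, since each of those subtrees contains exactly the two files a and b; picking only the topmost of the nested ones would be a final filtering step.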
A traversal can identify directories which are duplicates in the sense you describe. I take it that this is: if all files in a directory are equal to all files of another then their paths are duplicates.
Find all files in each directory and form a string from their names. You can concatenate the names with a comma, say (or some other sequence that is certainly not in any name). This string is what gets compared. Prepend the path to this string, so as to identify directories.
Comparison can be done, for instance, by populating a hash whose keys are the filename strings and whose values are the paths. Once you find that a key already exists you can check the contents of the files, and add the path to the list of duplicates.
The strings with path don't have to be actually formed, as you can build the hash and dupes list during the traversal. Having the full list first allows for other kinds of accounting, if desired.
This is altogether very little code to write.
An example. Let's say that you have
dir1/subdir1/{a,b} # duplicates (files 'a' and 'b' are considered equal)
dir2/subdir2/{a,b}
and
proj1/subproj1/{a,b,X} # NOT duplicates, since there are different files
proj2/subproj2/{a,b,Y}
The above prescription would give you strings
'dir1/subdir1/a,b',
'dir2/subdir2/a,b',
'proj1/subproj1/a,b,X',
'proj2/subproj2/a,b,Y';
where the (sub)string 'a,b' identifies dir1/subdir1 and dir2/subdir2 as duplicates.
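For illustration only, a tiny bash sketch of this bookkeeping (my own, not the answer's; it covers only the flat case of the example above, where each directory holds files and nothing else - the nested case is handled next):
#!/bin/bash
# Key each directory by the sorted, comma-joined names of the files directly inside it.
declare -A seen
for dir in */*/ ; do                                              # dir1/subdir1/, proj1/subproj1/, ...
    names=$(cd "$dir" 2>/dev/null && printf '%s\n' * | sort | paste -sd, -)   # e.g. "a,b" or "a,b,X"
    [[ -n "$names" ]] || continue
    if [[ -n "${seen[$names]:-}" ]]; then
        echo "same file list ($names): ${seen[$names]} and $dir"  # candidates - compare file contents next
    else
        seen[$names]="$dir"
    fi
done
As the answer says, a matching key only makes two directories candidates; the contents of the files still have to be compared before declaring them duplicates.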
I don't see how you can avoid a traversal to build a system that accounts for all files.
The procedure above is only the first step; it does not yet handle directories that contain both files and subdirectories.
Consider
dirA/
    a
    b
    sdA/
        c
        d
dirB/
    a
    X
    sdB/
        c
        d
Here the paths dirA/sdA/ and dirB/sdB/ are duplicates by the problem description but the whole dirA/ and dirB/ are distinct. This isn't shown in the question but I'd expect it to be of interest.
The procedure from the first part can be modified for this. Iterate through directories, forming a path component at every step. Get all files in each, and all subdirectories (if none we are done). Append the comma-separated file list to the path component (/sdA/). So the representation of the above is
'dirA/sdA,a,b/c,d', 'dirB/sdB,a,X/c,d'
For each file-list substring (c,d) found to already exist we can check its path against the existing one, component by component. Now a hash with keys like c,d won't do, since this example has the same file-list for distinct hierarchies, so a modified (or different) data structure is needed.
Finally, there may be more subdirectories parallel to sdA (say sdA2). We care only about its own path, except for the parallel files (a,b, in that component of the path dirA/sdA2,a,b/). So keep all bottom-level file-lists (c,d) with their paths and, if the file-lists are equal and the paths are of the same length, check whether their paths have equal a,b file-lists in each path component.
I don't know whether this is a workable solution for you, but I'd expect "near-duplicates" to be rare - the backup is either a duplicate or not. So there may not be much need to handle further edge cases in complex sprawling hierarchies. This procedure should be at least a useful pre-selection mechanism that would greatly reduce the need for further work.
This assumes that equal file-names very likely indicate equal files. A part of that is my expectation that if a file was even just renamed it still cannot be considered a duplicate. If this is not so this approach won't work and one would need something along the lines of the answer by jm666.
I made a tool which searches for duplicate folders.
https://github.com/un1t/dirdups
dirdups testdir -i 1
The -i 1 option treats folders as duplicates if they have at least 1 file in common. Without this option the default value is 10.
In your case it will find the following directories:
testdir/d1/d12/
testdir/d2/d1some/d12copy/

Naming a file with a variable in a shell script

I'm writing a unix shell script that sorts data in ten subdirectories (labelled 1-10) of the home directory. In each subdirectory, the script needs to rename the files hehd.output and fort.hehd.time, as well as copy the file hehd.data to a .data file with a new name.
What I'd like it to do is rename each of these files in the following format:
AA.BB.CC
Where
AA = a variable in the hehd.data file within the subdirectory containing the file
BB = the name of the subdirectory containing the file (1-10)
CC = the original file name
Each subdirectory contains an hehd.data file, and each hehd.data file contains the string ij0=AA, where AA represents the variable I want to use to rename the files in the same subdirectory.
For example: When run, the script should search /home/4/hehd.data for the string ij0=2, then move /home/4/hehd.output to /home/4/2.4.hehd.output.
I'm currently using the grep command to have the script search for the string ij0=* and copy it to a new text file within the subdirectory. Next, the string ij0= is deleted from the text file, and then its contents are used to rename all target files in the same subdirectory. The last line of the shell script deletes the text file.
I'm looking for a better way to accomplish this, preferably such that all ten subdirectories can be sorted at once by the same script. My script seems incredibly inefficient, and doesn't do everything that I want it to by itself.
How can I improve this?
Any advice or suggestions would be appreciated; I'm trying to become a better computer user and that means learning better ways of doing things.
Try this:
fromdir=/home
for i in {1..10}; do
    # extract the ij0 value from hehd.data (only the matching part is printed)
    AA=$(sed -n 's/.*ij0=\([0-9]*\).*/\1/p' "$fromdir/$i/hehd.data")
    BB="$i"
    for f in "$fromdir/$i/"*; do
        CC="${f##*/}"
        if [[ "$CC" = "hehd.data" ]]; then
            echo cp "$f" "$fromdir/$i/$AA.$BB.$CC"
        else
            echo mv "$f" "$fromdir/$i/$AA.$BB.$CC"
        fi
    done
done
It loops over the directories using the Bash sequence {1..10}.
In each directory, the sed command assigns the ij0 value to the AA variable, and the directory name is assigned to BB.
In the file loop, if the file is hehd.data it's copied, else it's renamed with the new name.
You can remove the echo before cp and mv commands if the output meets your needs.
