rsync rename duplicated files in dest directory - bash

I have implemented an rsync-based system to move files from different environments to others.
The problem I'm facing now is that sometimes there are files with the same name but different paths and content.
I want to make rsync rename duplicated files (if possible), because I need and use the --no-relative option.
Duplicated files can occur in two ways:
There was a file with same name in dest directory already.
In the same rsync execution, we are transferring files with the same name from different locations, e.g. dir1/file.txt and dir2/file.txt.
Adding the -b --suffix options allows me to keep at least one copy for the first type of duplicate mentioned above.
A minimum example (for Linux based systems):
mkdir sourceDir1 sourceDir2 sourceDir3 destDir;
echo "1" >> sourceDir1/file.txt;
echo "2" >> sourceDir2/file.txt;
echo "3" >> sourceDir3/file.txt;
rsync --no-relative sourceDir1/file.txt destDir
rsync --no-relative -b --suffix="_old" sourceDir2/file.txt sourceDir3/file.txt destDir
Is there any way to achieve my requirements?

I don't think that you can do it directly with rsync.
Here's a work-around in bash that does some preparation work with find and GNU awk and then calls rsync afterwards.
The idea is to categorize the input files by "copy number" (for example sourceDir1/file.txt would be the copy #1 of file.txt, sourceDir2/file.txt the copy #2 and sourceDir3/file.txt the copy #3) and generate a file per "copy number" containing the list of all the files in that category.
Then, you just have to launch one rsync per category with --files-from and a customized --suffix.
Pros
fast: far faster than firing one rsync per file.
safe: it won't ever overwrite a file (see step #3 below).
robust: handles any filename, even ones with newlines in them.
Cons
the destination directory has to be empty (or else it might overwrite a few files).
the code is a little long (and I made it longer by using a few process substitutions and by splitting the awk call into two).
Here are the steps:
0)   Use a correct shebang for bash in your system.
#!/usr/bin/env bash
1)   Create a directory for storing the temporary files.
tmpdir=$( mktemp -d ) || exit 1
2)   Categorize the input files by "duplicate number", generate the files for rsync --from-file (one per dup category), and get the total number of categories.
read filesCount < <(
    find sourceDir* -type f -print0 |
    LANG=C gawk -F '/' '
        BEGIN {
            RS = ORS = "\0"
            tmpdir = ARGV[2]
            delete ARGV[2]
        }
        {
            id = ++seen[$NF]
            if ( ! (id in outFiles) ) {
                outFilesCount++
                outFiles[id] = tmpdir "/" id
            }
            print $0 > outFiles[id]
        }
        END {
            printf "%d\n", outFilesCount
        }
    ' - "$tmpdir"
)
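With the three-directory example from the question, this creates $tmpdir/1, $tmpdir/2 and $tmpdir/3, each holding one NUL-terminated path (sourceDir1/file.txt, sourceDir2/file.txt and sourceDir3/file.txt respectively), and filesCount is 3. You can inspect a category file with, for example:
tr '\0' '\n' < "$tmpdir/1"    # prints: sourceDir1/file.txt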
3)   Find a unique suffix for rsync --suffix, built from a given set of characters, such that no input filename ends with it (so appending it can never collide with an existing name).
note: You can skip this step if you know for sure that there's no existing filename that ends with _old+number.
(( filesCount > 0 )) && IFS='' read -r -d '' suffix < <(
    LANG=C gawk -F '/' '
        BEGIN {
            RS = ORS = "\0"
            charsCount = split( ARGV[2], chars )
            delete ARGV[2]
            for ( i = 1; i <= 255; i++ )
                ord[ sprintf( "%c", i ) ] = i
        }
        {
            l0 = length($NF)
            l1 = length(suffix)
            if ( substr( $NF, l0 - l1, l1 ) == suffix ) {
                n = ord[ substr( $NF, l0 - l1 - 1, 1 ) ]
                suffix = chars[ (n + 1) % charsCount ] suffix
            }
        }
        END {
            print suffix
        }
    ' "$tmpdir/1" '0/1/2/3/4/5/6/7/8/9/a/b/c/d/e/f'
)
4)   Run the rsync(s).
for (( i = filesCount; i > 0; i-- ))
do
    fromFile=$tmpdir/$i
    rsync --no-R -b --suffix="_old${i}_$suffix" -0 --files-from="$fromFile" ./ destDir/
done
5)   Clean-up the temporary directory.
rm -rf "$tmpdir"
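With the minimal example from the question, the rsyncs run from category 3 down to 1, so each run backs up the copy before it, and destDir should end up with something like:
file.txt                   (content "1")
file.txt_old1_<suffix>     (content "2")
file.txt_old2_<suffix>     (content "3")
where <suffix> is whatever step #3 generated.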

Guess it's not possible with only rsync. You have to make a list of files first and analyze it to work around dupes. Take a look at this command:
$ rsync --no-implied-dirs --relative --dry-run --verbose sourceDir*/* dst/
sourceDir1/file.txt
sourceDir2/file.txt
sourceDir3/file.txt
sent 167 bytes received 21 bytes 376.00 bytes/sec
total size is 6 speedup is 0.03 (DRY RUN)
Let's use it to create a list of source files:
mapfile -t list < <(rsync --no-implied-dirs --relative --dry-run --verbose sourceDir*/* dst/)
Now we can loop through this list with something like this:
declare -A count
for item in "${list[@]}"; {
[[ $item =~ ^sent.*bytes/sec$ ]] && break
[[ $item ]] || break
fname=$(basename "$item")
echo "$item dst/$fname${count[$fname]}"
((count[$fname]++))
}
sourceDir1/file.txt dst/file.txt
sourceDir2/file.txt dst/file.txt1
sourceDir3/file.txt dst/file.txt2
Change the echo to an actual rsync invocation and that is it.
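A sketch of that final loop with the echo swapped for a real transfer (same dst/ layout as above; the target name just appends the running count, as in the output shown):
declare -A count
for item in "${list[@]}"; {
    [[ $item =~ ^sent.*bytes/sec$ ]] && break
    [[ $item ]] || break
    fname=$(basename "$item")
    rsync "$item" "dst/$fname${count[$fname]}"    # copy under a de-duplicated name
    ((count[$fname]++))
}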

Related

Paste hundreds of files with specific pattern name in bash/awk/c

I have 500 files and I want to merge them by adding columns.
My first file
3
4
1
5
My second file
7
1
4
2
Output should look like
3 7
4 1
1 4
5 2
But I have 500 files (sum_1.txt, sum_501.txt, up to sum_249501.txt), so I would need 500 columns, and it would be very tedious to write out 500 file names.
Is it possible to do this more easily? I tried the following, but instead of 500 columns it produces a lot of rows:
#!/bin/bash
file_name="sum"
tmp=$(mktemp) || exit 1
touch ${file_name}_calosc.txt
for first in {1..249501..500}
do
paste -d ${file_name}_calosc.txt ${file_name}_$first.txt >> ${file_name}_calosc.txt
done
Something like this (untested) should work regardless of how many files you have:
awk '
BEGIN {
for (i=1; i<=249501; i+=500) {
ARGV[ARGC++] = "sum_" i ".txt"
}
}
{ vals[FNR] = (NR==FNR ? "" : vals[FNR] OFS) $0 }
END {
for (i=1; i<=FNR; i++) {
print vals[i]
}
}
'
It'd only fail if the total content of all the files was too big to fit in memory.
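The program writes the merged columns to standard output, so you would capture them with a redirect; for example, if the awk body above were saved in a file (merge_cols.awk is a hypothetical name):
awk -f merge_cols.awk > sum_calosc.txt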
Your command says to paste two files together; to paste more files, give more files as arguments to paste.
You can paste a number of files together like
paste sum_{1..249501..500}.txt > sum_calosc.txt
but if the number of files is too large for paste, or the resulting command line is too long, you may still have to resort to temporary files.
Here's an attempt to paste 25 files at a time, then combine the resulting 20 files in a final big paste.
#!/bin/bash
d=$(mktemp -d -t pastemanyXXXXXXXXXXX) || exit
# Clean up when done
trap 'rm -rf "$d"; exit' ERR EXIT
for ((i=1; i<= 249501; i+=500*25)); do
printf -v dest "paste%06i.txt" "$i"
for ((j=1, k=i; j<=25; j++, k+=500)); do
printf "sum_%i.txt\n" "$k"
done |
xargs paste >"$d/$dest"
done
paste "$d"/* >sum_calosc.txt
The function of xargs is to combine its inputs into a single command line (or more than one if it would otherwise be too long; but we are specifically trying to avoid that here, because we want to control exactly how many files we pass to paste).
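A quick illustration of how xargs groups its standard input into command lines:
$ seq 1 5 | xargs -n 2 echo
1 2
3 4
5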

Script to pick random directory in bash

I have a directory full of directories containing exam subjects I would like to work on randomly to simulate the real exam.
They are classified by difficulty level:
0-0, 0-1 .. 1-0, 1-1 .. 2-0, 2-1 ..
I am trying to write a shell script allowing me to pick one subject (directory) randomly based on the parameter I pass when executing the script (0, 1, 2 ..).
I can't quite figure it, here is my progress so far:
ls | find . -name "1$~" | sort -r | head -n 1
What am I missing here?
There's no need for any external commands (ls, find, sort, head) for this at all:
#!/usr/bin/env bash
set -o nullglob # make globs expand to nothing, not themselves, when no matches found
dirs=( "$1"*/ ) # list directories starting with $1 into an array
# Validate that our glob actually had at least one match
(( ${#dirs[@]} )) || { printf 'No directories start with %q at all\n' "$1" >&2; exit 1; }
idx=$(( RANDOM % ${#dirs[@]} )) # pick a random index into our array
echo "${dirs[$idx]}" # and look up what's at that index
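Usage, assuming the script is saved as pick_subject (a hypothetical name) and run from the directory containing the subject directories:
$ ./pick_subject 1
1-4/
(which of the 1-* directories gets printed is, of course, random).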

Cleanest way to get the highest suffix (or prefix) of a certain file type in a set of directories with bash?

I have a set of data files across a number of directories with format
ls lcp01/output/
> dst000.dat dst001.dat ... dst075.dat nn000.dat nn001.dat ... nn036.dat aa000.dat aa001.dat ... aa040.dat
That is to say, there are a set of directories lcp01 through lcp25 with a collection of different data files in their output folders. I want to know what the highest number dstXXX.dat file is in each directory (in the example shown the result would be 75).
I wrote a script which achieves this, but I'm not satisfied with the final step which feels a bit hacky:
#!/bin/bash
for i in `seq -f "%02g" 1 25`; #specify dir extensions 1 through 25
do
echo " "
echo $i
names=($(ls lcp$i/output | grep dst )) #dir containing dst files
NUMS=()
for j in "${names[@]}";
do
temp="$(echo $j | tr -dc '0-9' && printf " ")" # record suffixes for each dst file
NUMS+=("$((10#$temp))") #force base 10 interpretation of dst suffixes
done
numList="$(echo "${NUMS[*]}" | sort -nr | head -n1)"
echo ${numList:(-3)} #print out the last 3 characters of the sorted list - the largest file suffix
done
The final two steps organise the list of output indices, then I show the last 3 characters of that list which will be my largest file number (providing the file numbers are smaller than 100).
Is there a cleaner way of doing this? Ideally I would like more control over the output format, but mainly it's the step of reading the last 3 characters out. I would like to be able to just output the largest number, which should be the last element of the list but I cannot figure out how.
You could do something like the following:
for d in lcp[0-9][0-9]; do find $d -name 'dst*.dat' -print | sort -u | tail -n1; done
The above command will only work if the numbering has the same number of digits (dst001..999.dat), as it is sorted as a string; if that's not the case:
for d in lcp[0-9][0-9]; do echo -n $d: ; find $d -name 'dst*.dat' -print | grep -o '[0-9]*\.dat' | sort -n | tail -n1; done
using filename expansions
for d in lcp*/output; do
files=( $d/dst*.dat )
file=${files[-1]}
[[ -e $file ]] || continue
file=${file##*dst}
echo ${file%.dat}
done
or with extension option to restrict pattern to numbers
shopt -s extglob
... lcp*([0-9])/output
... $d/dst*([0-9]).dat
...
file=${file##*dst*(0)}
...
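If the widths ever vary and you just want the numerically largest suffix per directory, here is a small pure-bash sketch (assuming the lcp*/output layout from the question):
for d in lcp*/output; do
    max=-1
    for f in "$d"/dst*.dat; do
        [[ -e $f ]] || continue
        n=${f##*/dst}; n=${n%.dat}
        (( 10#$n > max )) && max=$(( 10#$n ))    # force base 10 so 075 -> 75
    done
    (( max >= 0 )) && printf '%s: %d\n' "$d" "$max"
done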

Shell script: segregate multiple files

I have this in my local directory ~/Report:
Rep_{ReportType}_{Date}_{Seq}.csv
Rep_0001_20150102_0.csv
Rep_0001_20150102_1.csv
Rep_0102_20150102_0.csv
Rep_0503_20150102_0.csv
Rep_0503_20150102_0.csv
Using shell-script,
How do I get multiple files from a local directory with a fixed batch size?
How do I segregate/group the files together by report type (0001 files are grouped together, 0102 grouped together, 0503 grouped together, etc.)
I will generate a sequence file (using forqlift) for EACH group/report type. The output would be Report0001.seq, Report0102.seq, Report0503.seq (3 sequence files). In which I will save to a different directory.
Note: In sequence files, the key is the filename of csv (Rep_0001_20150102.csv), and the value is the content of the file. It is stored as [String, BytesWritable].
This is my code:
1 reportTypes=(0001 0102 8902)
2
3 # collect all files matching expression into an array
4 filesWithDir=(~/Report/Rep_[0-9][0-9][0-9][0-9]_[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]_[0-1].csv)
5
6 # take only the first hundred
7 filesWithDir=( "${filesWithDir[@]:0:100}" )
8
9 # files="${filesWithDir[@]##*/}" #### commented out since forqlift cannot create sequence file without the path/to/file
10 # echo ${files[@]}
11
12 shopt -s nullglob
13
14 # Line 21 is commented out since it has a bug. It collects files in
15 # current directory when it should be filtering the "files array" created
16 # in line 7
17
18
19 for i in ${reportTypes[@]}; do
20 printf -v val '%04d' "$i"
21 # files=("Rep_${val}_"*.csv)
# solution to BUG: (filter files array)
groupFiles=( $( for j in ${filesWithDir[@]} ; do echo $j ; done | grep ${val} ) )
22
23 # Generate sequence file for EACH Report Type
24 forqlift create --file="Report${val}.seq" "${groupFiles[@]}"
25 done
(Note: The sequence file output should be in current directory, not in ~/Report)
It's easy to take only a subset of an array:
# collect all files matching expression into an array
files=( ~/Report/Rep_[0-9][0-9][0-9][0-9]_[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]_[0-9].csv )
# take only the first hundred
files=( "${files[@]:0:100}" )
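Slicing works on any bash array; for example:
arr=(a b c d e)
echo "${arr[@]:1:3}"    # prints: b c d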
The second part is trickier: Bash has associative arrays ("maps"), but the only values that can legally be stored in arrays are strings -- not other arrays -- so you can't store a list of filenames as the value associated with a single entry (without serializing the array to and from a string, which is moderately tricky to do safely, since file paths in UNIX can contain any character other than NUL, newlines included).
It's better, then, to just generate the array as you need it.
shopt -s nullglob # allow a glob to expand to zero arguments
for ((i=1; i<=1000; i++)); do
printf -v val '%04d' "$i" # pad digits: 12 -> 0012
files=( "Rep_${val}_"*.csv ) # collect files that match
## emit NUL-separated list of files, if any were found
#(( ${#files[@]} )) && printf '%s\0' "${files[@]}" >"Reports.$val.txt"
# Create a sequence file with forqlift (skip empty groups)
(( ${#files[@]} )) && forqlift create --file="Reports-${val}.seq" "${files[@]}"
done
If you really don't want to do that, then we can put something together that uses namevars for redirection:
#!/bin/bash
# This only works with bash 4.3
re='^Rep_([[:digit:]]{4})_[[:digit:]]{8}_[[:digit:]]+\.csv$'
counter=0
for f in *; do
[[ $f =~ $re ]] || continue # skip files not matching regex
if ((++counter > 100)); then break; fi # stop after 100 files
group=${BASH_REMATCH[1]} # retrieve first regex group
declare -g -a "array${group}" # declare an array
declare -n group_arr="array${group}" # redirect group_arr to that array
group_arr+=( "$f" ) # append to the array
done
for varname in "${!array@}"; do
declare -n group_arr="$varname"
## NUL-delimited form
#printf '%s\0' "${group_arr[@]}" \
# >"collection${varname#array}" # write to files named collection0001, etc.
# forqlift sequence file form
forqlift create --file="Reports-${varname#array}.seq" "${group_arr[#]}"
done
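A minimal illustration of the declare -n indirection used above:
declare -a fruits=(apple)
declare -n ref=fruits     # ref is now an alias for the array named fruits
ref+=(banana)
echo "${fruits[@]}"       # prints: apple banana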
I would move away from shell scripts and start to look towards perl.
#!/usr/bin/env perl
use strict;
use warnings;
my %groups;
while ( my $filename = glob ( "~/Report/Rep_*.csv" ) ) {
my ( $group, $id ) = ( $filename =~ m,/Rep_(\d{4})_(\d{8})_\d+\.csv$, );
next unless $group; # undefined means it didn't match
# anything past 100 in a group is discarded:
if ( @{ $groups{$group} // [] } < 100 ) {
push ( @{ $groups{$group} }, $filename );
}
}
foreach my $group ( keys %groups ) {
print "$group contains:\n";
print join ("\n", @{$groups{$group}}), "\n";
}
Another alternative is to cobble together some bash commands with a regexp.
See the implementation below:
# Explanation:
# ls -p = List all files and directories in local directory by path
# grep -v / = ignore subdirectories
# grep "^Rep_\d{4}_\d{8}\.csv$" = Look for files matching your regexp
# tail -100 = get 100 results
for file in $(ls -p | grep -v / | grep "^Rep_\d{4}_\d{8}\.csv$" | tail -100);
do echo $file;
# Use reg exp to extract the desired sequence
re="^Rep_([[:digit:]]{4})_([[:digit:]]{8}).csv$";
if [[ $name =~ $re ]]; then
sequence = ${BASH_REMATCH[1};
# Didn't end up using date, but in case you want it
# date = ${BASH_REMATCH[2]};
# Just in case the sequence file doesn't exist
if [ ! -f "$sequence" ] ; then
touch "$sequence"
fi
# Output/Concat your filename to the sequence file, which you can
# read in later to do whatever administrative tasks you wish to do
# to them
echo "$file" >> "$sequence"
fi
done;

bash for loop with numerated names

I'm currently working on a maths project and just run into a bit of a brick wall with programming in bash.
Currently I have a directory containing 800 texts files, and what I want to do is run a loop to cat the first 80 files (_01 through to _80) into a new file and save elsewhere, then the next 80 (_81 to _160) files etc.
all the files in the directory are listed like so: ath_01, ath_02, ath_03 etc.
Can anyone help?
So far I have:
#!/bin/bash
for file in /dir/*
do
echo ${file}
done
Which just simply lists my files. I know I need to use cat file1 file2 > newfile.txt somehow, but the numbered suffixes of _01, _02, etc. are confusing me.
Would it help if I changed the name of the file to use something other than an underscore? like ath.01 etc?
Cheers,
Since you know ahead of time how many files you have and how they are numbered, it may be easier to "unroll the loop", so to speak, and use copy-and-paste and a little hand-tweaking to write a script that uses brace expansion.
#!/bin/bash
cat ath_{001..080} > file1.txt
cat ath_{081..160} > file2.txt
cat ath_{161..240} > file3.txt
cat ath_{241..320} > file4.txt
cat ath_{321..400} > file5.txt
cat ath_{401..480} > file6.txt
cat ath_{481..560} > file7.txt
cat ath_{561..640} > file8.txt
cat ath_{641..720} > file9.txt
cat ath_{721..800} > file10.txt
Or else, use nested for-loops and the seq command
N=800
B=80
for n in $( seq 1 $B $N ); do
for i in $( seq $n $((n+B - 1)) ); do
cat ath_$i
done > file$((n/B + 1)).txt
done
The outer loop will iterate n through 1, 81, 161, etc. The inner loop will iterate i over 1 through 80, then 81 through 160, etc. The body of the inner loop just dumps the contents of the i-th file to standard output, and the aggregated output of the inner loop is stored in file1.txt, then file2.txt, etc.
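For reference, the outer seq just produces the batch starting points:
$ seq 1 80 800 | head -n 3
1
81
161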
You could try something like this:
cat "$file" >> "concat_$(( ${file#/dir/ath_} / 80 ))"
with ${file#/dir/ath_} you remove the prefix /dir/ath_ from the filename
$(( / 80 )) you get the suffix divided by 80 (integer division)
Also change the loop to
for file in /dir/ath_*
So you only get the files you need
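Put together, a sketch of the whole loop (assuming the files live in /dir and the groups should be numbered from 1, so that _01 through _80 land in group 1; /elsewhere is a placeholder for the destination):
#!/usr/bin/env bash
for file in /dir/ath_*; do
    n=${file#/dir/ath_}
    # (n - 1) / 80 + 1 maps 1..80 -> 1, 81..160 -> 2, and so on
    cat "$file" >> "/elsewhere/concat_$(( (10#$n - 1) / 80 + 1 ))"
done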
If you want groups of 80 files, you'd do best to ensure that the names are sortable; that's why leading zeroes were often used. Assuming that you only have one underscore in the file names, and no newlines in the names, then:
SOURCE="/path/to/dir"
TARGET="/path/to/other/directory"
(
cd $SOURCE || exit 1
ls |
sort -t _ -k2,2n |
awk -v target="$TARGET" \
'{  file[n++] = $1
    if (n >= 80)
    {
        printf "cat"
        for (i = 0; i < 80; i++)
            printf(" %s", file[i])
        printf(" >%s/%s.%.2d\n", target, "newfile", ++number)
        n = 0
    }
}
END {
    if (n > 0)
    {
        printf "cat"
        for (i = 0; i < n; i++)
            printf(" %s", file[i])
        printf(" >%s/%s.%.2d\n", target, "newfile", ++number)
    }
}' |
sh -x
)
The two directories are specified (where the files are and where the summaries should go); the command changes directory to the source directory (where the 800 files are). It lists the names (you could specify a glob pattern if you needed to) and sorts them numerically. The output is fed into awk which generates a shell script on the fly. It collects 80 names at a time, then generates a cat command that will copy those files to a single destination file such as "newfile.01"; tweak the printf() command to suit your own naming/numbering conventions. The shell commands are then passed to a shell for execution.
While testing, replace the sh -x with nothing, or sh -vn or something similar. Only add an active shell when you're sure it will do what you want. Remember, the shell script is in the source directory as it is running.
Superficially, the xargs command would be nice to use; the difficulty is coordinating the output file number. There might be a way to do that with the -n 80 option to group 80 files at a time and some fancy way to generate the invocation number, but I'm not aware of it.
Another option is to use xargs -n to execute a shell script that can deduce the correct output file number by listing what's already in the target directory. This would be cleaner in many ways:
SOURCE="/path/to/dir"
TARGET="/path/to/other/directory"
(
cd $SOURCE || exit 1
ls |
sort -t _ -k2,2n |
xargs -n 80 cpfiles "$TARGET"
)
Where cpfiles looks like:
TARGET="$1"
shift
if [ $# -gt 0 ]
then
old=$(ls -r "$TARGET"/newfile.?? 2>/dev/null | sed -n -e 's/.*newfile\.//p' -e '1q')
new=$(printf "%.2d" $(( ${old#0} + 1 )))
cat "$@" > "$TARGET/newfile.$new"
fi
The test for zero arguments avoids trouble with xargs executing the command once with zero arguments. On the whole, I prefer this solution to the one using awk.
Here's a macro for @chepner's first solution, using GNU Make as the templating language:
SHELL := /bin/bash
N = 800
B = 80
fileNums = $(shell seq 1 $$((${N}/${B})) )
files = ${fileNums:%=file%.txt}
all: ${files}
file%.txt : start = $(shell echo $$(( ($*-1)*${B}+1 )) )
file%.txt : end = $(shell echo $$(( $* * ${B} )) )
file%.txt:
cat ath_{${start}..${end}} > $@
To use:
$ make -n all
cat ath_{1..80} > file1.txt
cat ath_{81..160} > file2.txt
cat ath_{161..240} > file3.txt
cat ath_{241..320} > file4.txt
cat ath_{321..400} > file5.txt
cat ath_{401..480} > file6.txt
cat ath_{481..560} > file7.txt
cat ath_{561..640} > file8.txt
cat ath_{641..720} > file9.txt
cat ath_{721..800} > file10.txt
