I want to rsync contents from /local/path to server:/remote/path.
The files end with extensions composed of 4 digits.
If a file does not exist in the remote path, copy the file to remote and remove it from local.
If a file exists in the remote path and its size is no less than the local one, do not copy the file to remote, but still remove it from local.
I tried
rsync -avmhP --include='*.[0-9][0-9][0-9][0-9]' --include='*/' --exclude='*' --size-only --remove-source-files /local/path server:/remote/path
However, some files that already exist in the remote path remain in the local path.
Another question is: why do we need --include='*/' --exclude='*'? Why doesn't --include='*.[0-9][0-9][0-9][0-9]' alone work for the file filtering?
Do you mean --remove-sent-files instead of --remove-source-files?
According to the rsync man page:
--remove-sent-files
This tells rsync to remove from the sending side the files and/or symlinks that are newly created or whose content is updated on the receiving side. Directories and devices are not removed, nor are files/symlinks whose attributes are merely changed.
That means that only transferred files (the ones whose size changed) are deleted from the source. To make the include pattern effective, you first need to exclude everything else but the files you want. The three arguments you used mean: "exclude all files (--exclude='*'), except directories (--include='*/', so rsync can descend into them) and the files matching my pattern (--include='*.[0-9][0-9][0-9][0-9]')".
From the man page:
--include=PATTERN
don’t exclude files matching PATTERN
--exclude=PATTERN
exclude files matching PATTERN
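Putting it together, a minimal sketch of the intended transfer (same paths as in the question, and note the trailing slash on the source so that only the contents of /local/path are sent) could look like this:
rsync -avmhP \
    --include='*.[0-9][0-9][0-9][0-9]' \
    --include='*/' \
    --exclude='*' \
    --size-only \
    --remove-source-files \
    /local/path/ server:/remote/path/
Filter rules are evaluated in order and the first match wins, which is why the two includes must come before --exclude='*'. Also, as noted above, --remove-source-files only deletes files that were actually transferred, so files skipped because the remote copy already has the same size stay in the local path; that matches the leftovers you are seeing.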
Related
I have files in a directory with a pattern in the filename ("IMP-"). I need to copy the files from directory A to directory B, but I also keep the files in directory A. So in order to copy only the new files to directory B, I need to first list, each time I do a copy, the filenames in a text file (list.txt), and then copy only the files that aren't listed in that text file.
Example:
Directory A (/home/ftp/recep/)
Files, for example, can be:
/home/recep/IMP-avis2018.txt
/home/recep/IMP-avis2018.pdf
/home/recep/IMP-avis2017.jpg
/home/recep/IMP-avis2017.pdf
Directory B (/home/ftp/transfert/)
I need to copy all files matching IMP* to directory B (/home/ftp/transfert/).
And when a new file is received in directory A, I need this file, and only this file, to be copied to directory B (where files only stay 2 hours max).
I thought maybe I could do something with rsync, but I couldn't find an adequate option.
So maybe it could be a bash script.
Actions would be :
have a simple basic text file containing already processed files (for example liste.txt)
find files in directory A containing pattern IMP
for each of these files, read the liste.txt file and if the file is not listed in liste.txt, copy it to the directory B
You could try the option -n. The man page says:
-n, --no-clobber
do not overwrite an existing file (overrides a previous -i option)
So
cp -n A/* B/
should copy all files from A to B, except those that are already in B.
Another way would be rsync:
rsync -vu A/* B/
This syncs the files from A to B and prints the files that were actually copied.
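If you prefer the liste.txt approach described in the question, a minimal sketch could look like this (paths are the ones from the question; keeping liste.txt inside directory A is an assumption):
SRC=/home/ftp/recep
DST=/home/ftp/transfert
LIST=$SRC/liste.txt
touch "$LIST"                          # make sure the list of already processed files exists
for f in "$SRC"/IMP-*; do
    [ -e "$f" ] || continue            # no matching files at all
    name=${f##*/}                      # strip the directory part
    if ! grep -qxF "$name" "$LIST"; then
        cp "$f" "$DST"/ && echo "$name" >> "$LIST"
    fi
done
Because the script records each copied filename in liste.txt, a file is copied to B only once, even after it has been deleted from B two hours later.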
Kind of easy question, but I can't find the answer. I want to extract the contents of multiple zipped folders into a single directory. I am using the bash console, which is the only tool available on the particular website I am using.
For example, I have two folders: a.zip (which contains a1.txt and a2.txt) and b.zip (which contains b1.txt and b2.txt). I want to extract all four text files into a single directory.
I have tried
unzip \*.zip -d \newdirectory
But it creates two directories (a and b) with two text files in each.
I also tried concatenating the two zipped folders into one big folder and extracting it, but it still creates two directories, even when I specify a new directory.
I can't figure what I am doing wrong. Any help?
Thanks in advance!
Use the -j parameter to ignore any directory structure.
unzip -j -d /path/to/your/directory '*.zip'
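If the quoted wildcard is not handled the way you expect on that system, the same thing can be done with a plain loop over the archives (the target directory is the one from the command above):
mkdir -p /path/to/your/directory
for z in *.zip; do
    unzip -j "$z" -d /path/to/your/directory
done
Either way, with the a.zip and b.zip from the example you end up with a1.txt, a2.txt, b1.txt and b2.txt directly in the target directory.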
I've got the following as part of a shell script to copy site files up to a S3 CDN:
for i in "${S3_ASSET_FOLDERS[@]}"; do
    s3cmd sync -c /path/to/.s3cfg --recursive --acl-public --no-check-md5 --guess-mime-type --verbose --exclude-from=sync_ignore.txt /path/to/local/${i} s3://my.cdn/path/to/remote/${i}
done
Say S3_ASSET_FOLDERS is:
("one/" "two/")
and say both of those folders contain a file called... "script.js"
and say I've made a change to two/script.js - but not touched one/script.js
running the above command will first copy the file from /one/ to the correct location, although I've no idea why it thinks it needs to:
INFO: Sending file '/path/to/local/one/script.js', please wait...
File '/path/to/local/one/script.js' stored as 's3://my.cdn/path/to/remote/one/script.js' (13551 bytes in 0.1 seconds, 168.22 kB/s) [1 of 0]
... and then a remote copy operation for the second folder:
remote copy: two/script.js -> script.js
What's it doing? Why?? Those files aren't even similar. Different modified times, different checksums. No relation.
And I end up with an S3 bucket containing two incorrect files. The file in /two/ that should have been updated hasn't been, and the file in /one/ that shouldn't have changed is now overwritten with the contents of /two/script.js.
Clearly I'm doing something bizarrely stupid because I don't see anyone else having the same issue. But I've no idea what??
First of all, try running it without the --no-check-md5 option.
Second, I suggest you pay attention to directory names, specifically trailing slashes.
s3cmd documentation says:
With directories there is one thing to watch out for – you can either upload the directory and its contents or just the contents. It all depends on how you specify the source.
To upload a directory and keep its name on the remote side, specify the source without the trailing slash.
On the other hand, to upload just the contents, specify it with a trailing slash.
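For example, with the bucket and paths from the question, the difference looks like this (options omitted, just a sketch):
s3cmd sync /path/to/local/one  s3://my.cdn/path/to/remote/    # uploads the directory itself -> .../remote/one/script.js
s3cmd sync /path/to/local/one/ s3://my.cdn/path/to/remote/    # uploads only the contents    -> .../remote/script.js
In your loop the elements of S3_ASSET_FOLDERS already end in a slash ("one/", "two/"), so both the source and the destination end with trailing slashes; double-check that this really produces the layout you want on the bucket.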
I have to use bash scripting to copy files from one folder to another. If the destination folder already has a file with the same name and the source file is not newer, it should not be copied; only newer files should be copied. I could have used cp -u, but I was asked not to use it. Essentially I have to use the test command, testing with -ot. Please let me know how this could be done. I believe two for loops could be used, one to read the files in the source directory and one for the destination directory, and then the timestamps compared. The problem is that both for loops produce the absolute path names along with the file name, so I am not sure how to compare them.
Thanks
You can make use of parameter substitution:
for file in "$folder1"/* ; do
    filename=${file##*/}   # Remove everything up to the last slash.
Or, you can change the directory:
cd "$folder1"
for file in * ; do
    ## You have to use a full or relative path to $folder2 here
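Putting the pieces together with the test operators the question asks for, a minimal sketch might look like this ($folder1 and $folder2 stand for the source and destination directories, which are not named in the question):
for file in "$folder1"/* ; do
    filename=${file##*/}               # strip the leading path
    dest="$folder2/$filename"
    # Copy when the destination is missing or is older than the source.
    if [ ! -e "$dest" ] || [ "$dest" -ot "$file" ]; then
        cp "$file" "$dest"
    fi
done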
Hi, when I tried to transfer the contents of a folder (the folder has several subfolders and a few files) using the MQFTE fteCreateTransfer command, not only the few files in the folder but also the contents of the subfolders were transferred to the destination. The same subfolders are created at the destination and their contents are transferred. Is there a way to avoid the files from the subfolders being transferred?
As per this page in the Infocenter:
When a directory is specified as a source file specification, the
contents of the directory are copied. More precisely, all files in the
directory and in all its subdirectories, including hidden files, are
copied.
However, it looks like they anticipated your question because the page recently added this clarification:
For example, to copy the contents of DIR1 to DIR2 only, specify
fteCreateTransfer ... -dd DIR2 DIR1/*
So instead of specifying the folder, add the wildcard to the end and you get just the files in the top level of that folder. (Assuming, of course, that you do not also use the -r option!)
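For instance, a full invocation for this case might look like the line below; the agent and queue manager names (AGENT1, QM1, AGENT2, QM2) are placeholders, not values from your setup:
fteCreateTransfer -sa AGENT1 -sm QM1 -da AGENT2 -dm QM2 -dd DIR2 "DIR1/*"
Quoting the source specification keeps your local shell from expanding the wildcard, so the pattern is evaluated by the transfer agent instead.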