Copy files while skipping over files that exist - Unix [closed] - macos

I'd like to take a large folder (~100GB) and copy it over to another folder. I'd like it to skip any files that exist (not folders) so if /music/index.html does not exist it would still copy even though the /music directory already exists.
I found this, but my shell is saying -u is not a valid argument.
I don't know how rsync works, so please let me know if that's a better solution.
Thanks.

Always use rsync for copying files, because It Is Great.
To ignore existing files:
rsync --ignore-existing --recursive /src /dst
Do read the manual and search around for many, many great examples. The combination with ssh in particular makes rsync a great tool for slow and unreliable connections, thanks to its --partial option. Add --verbose to see which files are being copied. Be sure to check out the plethora of options concerning preservation of permissions, users and timestamps, too.

rsync(1) absolutely shines when the source and destination are on two different computers. It is still the better tool to use when the source and destination are on the same computer.
A simple use would look like:
rsync -av /path/to/source /path/to/destination
If you're confident that any files that exist in both locations are identical, then use the --ignore-existing option:
rsync -av --ignore-existing /path/to/source /path/to/destination
Just for completeness, when I use rsync(1) to make a backup on a remote system, the command I most prefer is:
rsync -avz -P /path/to/source hostname:/path/to/destination
The -z asks for compression (I wouldn't bother locally, but over a slower network link it can make a big difference) and the -P asks for --partial and --progress -- which will re-use partially transferred files if the transfer must be restarted, and will show a handy progress indicator.
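Putting those pieces together for the original question, a sketch might look like this (assuming /backup/music is a hypothetical destination; the trailing slash on /music/ tells rsync to copy the directory's contents rather than the directory itself):
rsync -av --ignore-existing --progress /music/ /backup/music/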

Related

Cannot recursive copy a hidden directory - UNIX [closed]

I'm currently trying to recursively copy a hidden directory using this command:
cp -r ../openshiftapp/.openshift .
It is not working... what can be wrong?
On OS X you should use -R rather than -r. The man page (on Snow Leopard 10.6.8) says:
Historic versions of the cp utility had a -r option. This implementation supports that option; however, its use is strongly discouraged, as it does not correctly copy special files, symbolic links, or fifo's.
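Applied to the original command, that gives:
cp -R ../openshiftapp/.openshift .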
The recursive option for the cp command would be used on directories, not files. The documentation states:
-R, -r, --recursive
copy directories recursively
The OS X docs have more info, but don't suggest that the option can be used with files. Instead, they still describe its use for copying directory contents:
-R    If source_file designates a directory, cp copies the directory and the entire subtree connected at that point. If the source_file ends in a /, the contents of the directory are copied rather than the directory itself. This option also causes symbolic links to be copied, rather than indirected through, and for cp to create special files rather than copying them as normal files. Created directories have the same mode as the corresponding source directory, unmodified by the process' umask.
In -R mode, cp will continue copying even if errors are detected.
Note that cp copies hard-linked files as separate files. If you need to preserve hard links, consider using tar(1), cpio(1), or pax(1) instead.
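As a minimal sketch of such a hard-link-preserving copy with tar, assuming /src and /dst are placeholder directories and /dst already exists (-p preserves permissions on extraction):
(cd /src && tar -cf - .) | (cd /dst && tar -xpf -)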

Sync a folder with tar without recreating the tar [closed]

I'm trying to write a script that keeps a tar in sync with a folder. I am dealing with a lot of files and don't want to remake the tar every time the script is run. I want it to only add/remove files from the tar that have been added to/removed from the folder since the last script run. Here's what I have.
# Create the tar if it doesn't exist, but don't overwrite it if it does
tarFile=/home/MyName/data.tar
touch -a "$tarFile"
cd /home/MyName
# Update the tar: -u appends only files newer than their archived copies
tar -uv --exclude='dirToTar/FileIWantToExclude' -f "$tarFile" dirToTar
This works great for adding files. But if a file is deleted from dirToTar, it doesn't get removed from data.tar.
Unfortunately, tar just doesn't support this. As an alternative, you could use zip, like this:
zip -r -FS myArchiveFile.zip dirToZip
Not "tar" like you asked for, but it does seem to work nicely. Another alternative would be to use 7z (the 7-zip archiver), which may give you better compression. The command-line options for this is obscure, but this works:
7z u -up1q0r2x2y2z1w2 myArchiveFile.7z dirToZip
(I found documentation for these 7z command-line options here: https://www.scottklement.com/p7zip/MANUAL/switches/update.htm. I don't know why it's so hard to find this documentation...).
If, for some reason, you don't want the compression provided by zip or 7z, there are ways to disable that too, so zip or 7z just create a file container kind of like tar does.
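For instance, a sketch using zip purely as a container, with the same hypothetical names as above: -0 stores entries without compression, while -FS still drops entries whose files have been deleted:
zip -r -FS -0 myArchiveFile.zip dirToZip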
In the end, though, I think you should just re-create the archive each time. I suspect that the time saved doing the kind of synchronization you ask for is probably small.

shell script for comparing files between two servers [closed]

Hey, I'm looking for a shell script to transfer compressed archives from server a to server b. Only the compressed archives which have not been transferred yet should be transferred from server a to server b.
Please don't just say scp or rsync, because they will copy all the files from server a to server b.
I want a script which checks whether each file exists on server b. If the file does not exist on server b, then it has to transfer that file from server a to server b.
As Oli points out - this is exactly what rsync does.... But if you want to go the manual way, then take a look at my answer here: rsync to backup one file generated in dynamic folders
What you could also do for the comparison is to ssh to host a first, run a command there, and store its output locally:
ssh localhost "find /var/tmp/ -name \* -exec du -sm {} \;" > /tmp/out.txt
head /tmp/out.txt
531 /var/tmp/
0 /var/tmp/aaa
1 /var/tmp/debian
You now have a local file with the remote filenames and sizes; feel free to expand as required.
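Building on that idea, here is a minimal sketch of the "copy only what's missing" logic, assuming passwordless ssh, unique archive names, and that serverb and /archives are hypothetical placeholders:
ssh serverb 'ls /archives' | sort > /tmp/remote.txt
ls /archives | sort > /tmp/local.txt
# comm -23 prints lines only in the first file: archives missing on server b
comm -23 /tmp/local.txt /tmp/remote.txt | while read -r f; do
  scp "/archives/$f" "serverb:/archives/"
done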

How to copy files across computers using SSH and MAC OS X Terminal [closed]

I'm trying to copy my .profile, .rvm and .ssh folders/files to a new computer and keep getting a "not a regular file" response. I know how to use the cp and ssh commands but I'm not sure how to use them in order to transfer files from one computer to another.
Any help would be great, thanks!
You can do this with the scp command, which uses the ssh protocol to copy files across machines. It extends the syntax of cp to allow references to other systems:
scp username1@hostname1:/path/to/file username2@hostname2:/path/to/other/file
Copy something from this machine to some other machine:
scp /path/to/local/file username@hostname:/path/to/remote/file
Copy something from another machine to this machine:
scp username@hostname:/path/to/remote/file /path/to/local/file
Copy with a port number specified:
scp -P 1234 username@hostname:/path/to/remote/file /path/to/local/file
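Note that .rvm and .ssh are directories, which is exactly what produces the "not a regular file" error with plain scp; the -r flag copies directories recursively. A sketch, with username and hostname standing in for the new machine:
scp -r ~/.rvm username@hostname:~/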
First zip or gzip the folders:
Use the following command:
zip -r NameYouWantForZipFile.zip foldertozip/
or
tar -pvczf BackUpDirectory.tar.gz /path/to/directory
Then use scp to copy the resulting archive over:
scp username@yourserver.com:~/serverpath/BackUpDirectory.tar.gz ~/Desktop
You may also want to look at rsync if you're copying a lot of files.
If you're going to be making a lot of changes and want to keep your directories and files in sync, you may want to use a version control system like Subversion or Git. See http://xoa.petdance.com/How_to:_Keep_your_home_directory_in_Subversion

bash scripting..copying files without overwriting [closed]

I would like to know if it is possible to copy/move files to a destination based on the origin name.
Basically, I have a /mail folder, which has several subfolders such as cur and new etc. I then have an extracted backup in /mail/home/username that is a duplicate. mv -f will not work, as I do not have permission to overwrite the directories, but only the files within.
I get errors such as mv: cannot overwrite directory `/home/username/mail/username.com'
What I want to do is for each file in the directory username.com, move it to the folder of the same name in /mail. There could be any number of folders in place of username.com, with separate subdirectories of their own.
What is the best way to do this?
I have to do it this way because, due to circumstances, I only have access to my host with FTP and bash via PHP.
edit: clarification
I think I need to clarify what happened. I am on a shared host, and apparently do not have write access to the directories themselves - at least the main ones such as mail and public_html. I made a backup of ~/mail with tar, but when extracting it, it extracted to ~/mail/home/mail etc., as I forgot about the full path. Now, I cannot simply untar it because the path is wrong, and I cannot mv -f because I only have write access to files, not directories.
For copying, you should consider using cpio in 'pass' mode (-p):
cd /mail; find . -type f | cpio -pvdmB /home/username/mail
The -v is for verbose; -d creates directories as necessary; -m preserves the modification times on the files; -B means use a larger block size, and may be irrelevant here (it used to make a difference when messing with tape devices). Omitted from this list is the -u flag that does unconditional copying, overwriting pre-existing files in the target area. The cd command ensures that the path names are correct; if you just did:
find /mail -type f | cpio -pvdmB /home/username
you would achieve the same result, but only by coincidence - because the sub-directory under /home/username was the same as the absolute pathname of the original. If you needed to do:
find /var/spool/mail -type f | cpio -pvdmB /home/username/mail
then the copied files would be found under /home/username/mail/var/spool/mail, which is unlikely to be what you had in mind.
You can achieve a similar effect with (GNU) tar:
(cd /mail; tar -cf - . ) | (cd /home/username/mail; tar -xf - )
This copies directories, not just files. To copy only the files, you need GNU-only facilities:
(cd /mail; find . -type f | tar -cf - -T - ) | (cd /home/username/mail; tar -xf - )
The first solo dash means 'write to stdout'; the second means 'read from stdin'; the -T option means 'read the names of the files to copy from the named file' (here, standard input).
I'm not entirely clear on what it is that you want to do, but you could try the following:
for file in /mail/*; do
    mv -f "$file" "/home/username/mail/$(basename "$file")"
done
This will move every file and subdirectory in /mail from there into /home/username/mail.
Is using tar an option? You could tar up the directory and extract it under /mail/ (I am assuming that is roughly what you want), with tar overwriting existing files and directories.
I'm a bit confused about what it is exactly that you want to do. But you should be able to use the approach of Adam's solution and redirect the errors to a file.
for file in /mail/*; do
    mv -f "$file" "/home/username/mail/$(basename "$file")"
done 2> /tmp/mailbackup.username.errors
Directories will not be overwritten, and you can check the file to make sure it only contains errors you anticipate.
Can you untar it again? The -P option to tar will not strip leading "/", so the absolute pathnames will be respected. From your edit, it sounds like this'll fix it.
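As a sketch, with /path/to/backup.tar as a hypothetical location for the archive: extracting from / with -P means that members stored with their full paths land back where they came from:
cd / && tar -xPf /path/to/backup.tar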
Even with your clarification I'm still having a problem understanding exactly what you're doing. However, any chance you can use rsync? The src and dest hosts can be the same host for rsync. As I recall, you can tell rsync to only update files that already exist in the destination area (--existing) and also to ignore directory changes (--omit-dir-times).
Again, I'm not quite understanding your needs here, but rsync is very flexible in backing up files and directories.
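As a minimal sketch, assuming the misplaced extraction ended up under ~/mail/home/username/mail and should update the real ~/mail tree (both paths are guesses from the question; the trailing slashes make rsync copy directory contents):
rsync -rv --existing --omit-dir-times ~/mail/home/username/mail/ ~/mail/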
Good luck.
