Sync a folder with tar without recreating the tar [closed] - bash

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 2 years ago.
I'm trying to write a script that keeps a tar archive in sync with a folder. I'm dealing with a lot of files and don't want to rebuild the tar every time the script runs. I want it to only add/remove files from the tar that have been added/removed from the folder since the last script run. Here's what I have:
# Create the tar if it doesn't exist, but don't overwrite it if it does
tarFile=/home/MyName/data.tar
touch -a "$tarFile"
cd /home/MyName
# Update the tar with files that are new or changed
tar -uv --exclude='dirToTar/FileIWantToExclude' -f "$tarFile" dirToTar
This works great for adding files. But if a file is deleted from dirToTar, it doesn't get removed from data.tar.

Unfortunately, tar just doesn't support this. As an alternative, you could use zip, like this:
zip -r -FS myArchiveFile.zip dirToZip
Not "tar" like you asked for, but it does seem to work nicely. Another alternative would be to use 7z (the 7-Zip archiver), which may give you better compression. The command-line options for this are obscure, but this works:
7z u -up1q0r2x2y2z1w2 myArchiveFile.7z dirToZip
(I found documentation for these 7z command-line options here: https://www.scottklement.com/p7zip/MANUAL/switches/update.htm. I don't know why it's so hard to find this documentation...).
If, for some reason, you don't want the compression provided by zip or 7z, there are ways to disable that too, so zip or 7z just create a file container kind of like tar does.
In the end, though, I think you should just re-create the archive each time. I suspect that the time saved doing the kind of synchronization you ask for is probably small.


I moved a few script files from one location to another without specifying names for those files in the destination [closed]

I have two script files to be moved to a new directory.
With the mv command I moved those files, but I couldn't find them in the new directory.
I made the mistake of not providing the filenames in the destination folder. The files are not present anywhere. How do I get my files back?
From user1 I moved the files:
sudo mv script1 /infinitescripts
sudo mv script2 /infinitescripts
I expected script1 and script2 to be present in the infinitescripts directory. But the directory is empty, and the files are not in the source either. I don't know where my files have gone.
If I have a file myfile in folder A (so, A/myfile), and I want to move that file into a folder B inside of folder A (so, into A/B/), I need to use mv myfile B. That will result in there being a file A/B/myfile.
What you ran was the equivalent of mv myfile /B, with that extra / in front. What the extra / does is tell the system to look in the root directory for that folder.
So what you did was accidentally move your files into the root directory under the name infinitescripts. If /infinitescripts already existed there as a directory, the files are safe and sound inside it; to find them, go to that directory with
cd /infinitescripts
Note that mv never creates the destination directory: if /infinitescripts did not exist beforehand, the first command renamed script1 to a plain file called /infinitescripts, and the second command then overwrote it with script2. You can check which case applies with ls -l /infinitescripts.
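The relative-vs-absolute distinction is easy to see locally (the names A, B, and myfile are the hypothetical ones from the answer above):

```shell
mkdir -p A/B && cd A
touch myfile
mv myfile B        # relative path: the file ends up at A/B/myfile
ls B/myfile
# "mv myfile /B" (with the leading slash) would instead target /B at the
# filesystem root -- the mistake made in the question.
```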

Cannot recursively copy a hidden directory - UNIX [closed]

I'm currently trying to recursively copy a hidden directory using this command:
cp -r ../openshiftapp/.openshift .
It is not working. What can be wrong?
On OS X you should use -R rather than -r. The man page (on Snow Leopard 10.6.8) says:
Historic versions of the cp utility had a -r option. This implementation supports that option; however, its use is strongly discouraged, as it does not correctly copy special files, symbolic links, or fifo's.
The recursive option for the cp command would be used on directories, not files. The documentation states:
-R, -r, --recursive
copy directories recursively
The OSX docs have more info, but don't suggest that the option can be used with files. Instead, it still mentions their use for copying directory contents:
-R If source_file designates a directory, cp copies the directory and the entire subtree connected
at that point. If the source_file ends in a /, the contents of the directory are copied rather
than the directory itself. This option also causes symbolic links to be copied, rather than
indirected through, and for cp to create special files rather than copying them as normal files.
Created directories have the same mode as the corresponding source directory, unmodified by the
process' umask.
In -R mode, cp will continue copying even if errors are detected.
Note that cp copies hard-linked files as separate files. If you need to preserve hard links, consider using tar(1), cpio(1), or pax(1) instead.
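A quick local check (with hypothetical names) shows -R copying a hidden directory together with its contents:

```shell
mkdir -p src/.openshift && echo config > src/.openshift/settings
cp -R src/.openshift copied    # copies the hidden directory and its subtree
cat copied/settings            # prints: config
```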

Copy shell script and preserve permissions [closed]

I have a small shell script that starts a program when I double-click it. (I have set the permissions to allow executing the script).
I want to be able to copy that script to another computer so that the new user can double-click it without needing to know anything about chmod or permissions. But I can't find out how to preserve the execute permission when I copy the file.
I can usually find answers with Google but this has me defeated - I guess I am not expressing my question properly.
Thanks
Use rsync or tar.
rsync -p file user@host:destdir
plus other options you might need.
Or
tar cvzf file.tar file
then copy (or email, etc.) file.tar to the other machine and extract the file:
tar xpvzf file.tar
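The tar round trip can be verified locally; the execute bit survives packing and extraction (the script name is hypothetical):

```shell
printf '#!/bin/sh\necho hello\n' > start.sh
chmod +x start.sh
tar -czf start.tar.gz start.sh    # tar records the mode bits in the archive
mkdir other && cd other
tar -xpzf ../start.tar.gz         # -p restores permissions on extraction
ls -l start.sh                    # still executable
```

When copying directly over ssh, scp -p similarly preserves modes and timestamps.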

Copy files while skipping over files that exist - Unix [closed]

I'd like to take a large folder (~100GB) and copy it over to another folder. I'd like it to skip any files that exist (not folders) so if /music/index.html does not exist it would still copy even though the /music directory already exists.
I found this, but my shell is saying -u is not a valid argument.
I don't know how rsync works, so please let me know if that's a better solution.
Thanks.
Always use rsync for copying files, because It Is Great.
To ignore existing files:
rsync --ignore-existing --recursive /src /dst
Do read the manual and search around for many, many great examples. Especially the combination with ssh makes rsync a great tool for slow and unreliable connections on account of its --partial option. Add --verbose to see which files are being copied. Be sure to check out the plethora of options concerning preservation of permissions, users and timestamps, too.
rsync(1) absolutely shines when the source and destination are on two different computers. It is still the better tool to use when the source and destination are on the same computer.
A simple use would look like:
rsync -av /path/to/source /path/to/destination
If you're confident that any files that exist in both locations are identical, then use the --ignore-existing option:
rsync -av --ignore-existing /path/to/source /path/to/destination
Just for completeness, when I use rsync(1) to make a backup on a remote system, the command I most prefer is:
rsync -avz -P /path/to/source hostname:/path/to/destination
The -z asks for compression (I wouldn't bother locally, but over a slower network link it can make a big difference) and the -P asks for --partial and --progress -- which will re-use partial file transfers if it must be restarted, and will show a handy progress bar indicator.
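Before a large copy, a dry run with -n (using the same hypothetical paths) shows what rsync would transfer without touching anything:

```shell
# -n (--dry-run) only lists the files that would be copied; nothing is written.
rsync -avn --ignore-existing /path/to/source/ /path/to/destination/
```

Note the trailing slash on the source: with it, rsync copies the directory's contents into the destination; without it, it creates a source subdirectory inside the destination.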

How to copy files across computers using SSH and MAC OS X Terminal [closed]

I'm trying to copy my .profile, .rvm and .ssh folders/files to a new computer and keep getting a "not a regular file" response. I know how to use the cp and ssh commands but I'm not sure how to use them in order to transfer files from one computer to another.
Any help would be great, thanks!
You can do this with the scp command, which uses the ssh protocol to copy files across machines. It extends the syntax of cp to allow references to other systems:
scp username1@hostname1:/path/to/file username2@hostname2:/path/to/other/file
Copy something from this machine to some other machine:
scp /path/to/local/file username@hostname:/path/to/remote/file
Copy something from another machine to this machine:
scp username@hostname:/path/to/remote/file /path/to/local/file
Copy a directory recursively (needed for folders such as .rvm and .ssh; without -r, scp reports "not a regular file"):
scp -r username@hostname:/path/to/remote/dir /path/to/local/dir
Copy with a port number specified:
scp -P 1234 username@hostname:/path/to/remote/file /path/to/local/file
First zip or gzip the folders. Use the following command:
zip -r NameYouWantForZipFile.zip foldertozip/
or, for gzip compression:
tar -pvczf BackUpDirectory.tar.gz /path/to/directory
Then copy the archive with scp:
scp username@yourserver.com:~/serverpath/BackUpDirectory.tar.gz ~/Desktop
You may also want to look at rsync if you're doing a lot of files.
If you're going to making a lot of changes and want to keep your directories and files in sync, you may want to use a version control system like Subversion or Git. See http://xoa.petdance.com/How_to:_Keep_your_home_directory_in_Subversion
