I am writing a little script that assembles backup data into one directory. The directory's content will then be uploaded to a cloud service, and after that we can remove it. I was wondering how one could utilize APFS's copy-on-write feature with a command like cp in Terminal.
The Finder does a great job, but if I run cp Largefile LargeFileCopy it takes forever to copy the file and also uses the corresponding amount of disk space.
I found it myself.
On macOS, cp supports the -c option. cp -c Largefile LargeFileCopy then uses the clonefile(2) system call and returns almost immediately, without using any additional space on the device.
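A quick sanity check (a minimal sketch, assuming an APFS volume; the file names are taken from the question):

# Clone via copy-on-write; this returns almost instantly.
cp -c Largefile LargeFileCopy

# Both names share the same data blocks until one is modified,
# so free space should be essentially unchanged:
df -h .

On filesystems that don't support cloning, -c may fail or fall back to a regular copy depending on the macOS version, so it's worth testing on the target volume.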
I want to modify *.ica files (used to launch Citrix apps) when they are downloaded (to add a transparent Key Passthrough option for remote desktop), so I settled on using entr to monitor the directory and then call another script (which invokes sed) to update all the .ica files.
while true; do
ls *.ica | entr -d ~/Downloads/./transparentKeyPassthrough-CitrixIca.sh
done
However, this only works when there is already an .ica file in the directory. If the directory has no *.ica files when first executed, entr errors with:
entr: No regular files to match
Putting a dummy .ica file in the directory suffices; the new (real) .ica file will then be detected by entr and acted on.
Is there a better way to do this?
The alternative I can think of is to use entr to watch the whole directory for any changes, run ls -l *.ica when something changes, and, if the change produced a new .ica file, in turn run the above script.
It seems inelegant and complicated to nest entr that way, so I wanted to know if there is some simple option I am missing.
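For illustration, a non-nested variant I considered (a sketch, assuming a version of entr whose -d option accepts directory names in its input, as newer releases document): the directory itself is always in the list, so entr never starts with an empty set, and it exits so the loop restarts whenever a new file shows up.

while true; do
  ls -d ~/Downloads ~/Downloads/*.ica 2>/dev/null |
    entr -d ~/Downloads/transparentKeyPassthrough-CitrixIca.sh
done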
I am using fswatch to monitor a directory and run a script when video files are copied into that directory:
fswatch -o /Path/To/Directory/Directory | xargs -n 1 sh /Path/To/Script/Script.sh
The problem is that the file has often not finished copying before the script is run. The files are video files of varying size: small files are OK, larger files are not.
How can I delay the fswatch notification until the file has completed its copy?
First of all, the behaviour of the fswatch "monitors" is OS-specific: when asking question about fswatch you'd better specify the OS you use.
However, there's no way to do that using fswatch alone. A process may open a file for writing and keep it open for an amount of time sufficiently long for the OS to send multiple events. I'm afraid there is nothing fswatch can do about it.
An alternative approach is to use another tool to check whether the modified file is currently open: if it is not, run your script; otherwise skip it and wait for the next event. Such tools are OS-specific: on OS X and Linux you can use lsof. Beware that this approach does not protect you from another process opening the file while your script is running.
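A sketch of that check (assumptions: fswatch is run without -o so it prints the changed file's path, and check-then-run.sh is a hypothetical wrapper name):

#!/bin/sh
# check-then-run.sh -- skip events for files that are still open.
# lsof exits 0 if at least one process has the file open.
if lsof -- "$1" > /dev/null 2>&1; then
    exit 0   # still being written; a later event will retry
fi
/Path/To/Script/Script.sh "$1"

The pipeline then becomes something like fswatch /Path/To/Directory/Directory | xargs -n 1 sh check-then-run.sh.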
I used to use Ubuntu, and most of the time the commands here are the same as on Ubuntu. But there is one command that annoys me: cp (the copy command).
I often type cp -a FOLDER/ target to copy a folder; however, the command copies everything under FOLDER/ without including FOLDER/ itself.
I know I can get what I want by using cp -a FOLDER target (without the trailing slash), but the command is easy to misuse this way.
Is there any way to change this behavior? Thanks.
The command cp is a program in /bin, so you could rename it to another file, such as main_cp.exe, and create a script of your own called cp that changes the call from what you type to what is actually required and then calls main_cp.exe.
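A gentler variant of the same idea (a sketch: a shell function in ~/.bashrc or ~/.zshrc, which leaves /bin/cp untouched; on recent macOS, System Integrity Protection would block renaming /bin/cp anyway):

# Strip one trailing slash from each argument before calling the real cp.
cp() {
    local args=() a
    for a in "$@"; do
        args+=("${a%/}")   # "FOLDER/" becomes "FOLDER"; options pass through unchanged
    done
    command cp "${args[@]}"
}

This is naive (it doesn't special-case a bare / argument, for instance), but it illustrates the idea.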
However, if you're going to be using OSX a lot, and perhaps on other people's machines, training yourself to just not type the trailing slash is probably going to be better in the long term.
I would like to have a synchronized copy of one folder with all its subtree.
It should work automatically in this way: whenever I create, modify, or delete stuff in the original folder, those changes should be automatically applied to the sync-folder.
Which is the best approach to this task?
BTW: I'm on Ubuntu 12.04
The final goal is to have a separate, real-time backup copy, without the use of symlinks or mounts.
I used Ubuntu One to synchronize data between my computers, and after a while something went wrong and all my data was lost during a synchronization.
So I thought to add a step further to keep a backup copy of my data:
I keep my data stored in "folder A".
I need the answer to my current question to create a one-way sync of "folder A" to "folder B" (a cron job running an rsync script, perhaps?). It must be one-way, from A to B only: any changes to B must not be applied to A.
Then I simply keep "folder B" synchronized with Ubuntu One.
In this manner any change in A will be applied to B, which will be detected by U1 and synchronized to the cloud. If anything goes wrong and U1 deletes my data on B, I still have it in A.
Inspired by lanzz's comments, another idea could be to run rsync at startup to back up the content of a folder under Ubuntu One, and start Ubuntu One only after rsync has completed.
What do you think about that?
How to know when rsync ends?
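A sketch of that startup sequence: rsync runs in the foreground, so "knowing when it ends" is just a matter of putting the next command after it; && additionally ensures Ubuntu One only starts if rsync succeeded. (Treat u1sdtool as an assumption for the Ubuntu One sync-daemon control tool; check your installation.)

#!/bin/bash
# One-way mirror of A into B; --delete makes B match A exactly.
rsync -a --delete /path/to/folderA/ /path/to/folderB/ \
  && u1sdtool --start   # start the Ubuntu One syncdaemon only afterwards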
You can use inotifywait (with the modify,create,delete,move flags enabled) and rsync.
while inotifywait -r -e modify,create,delete,move /directory; do
rsync -avz /directory /target
done
If you don't have inotifywait on your system, run sudo apt-get install inotify-tools
You need something like this:
https://github.com/axkibe/lsyncd
It is a tool which combines rsync and inotify: the former is a tool that mirrors, with the correct options set, a directory down to the last bit; the latter tells the kernel to notify a program of changes to a directory or file.
It says:
It aggregates and combines events for a few seconds and then spawns one (or more) process(es) to synchronize the changes.
But - according to Digital Ocean at https://www.digitalocean.com/community/tutorials/how-to-mirror-local-and-remote-directories-on-a-vps-with-lsyncd - it ought to be in the Ubuntu repository!
I have similar requirements, and this tool, which I have yet to try, seems suitable for the task.
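A minimal way to try it from the shell (a sketch, assuming the packaged lsyncd; -rsync is its built-in rsync-based mirroring mode):

# Mirror /path/to/source into /path/to/target, batching inotify events.
lsyncd -rsync /path/to/source /path/to/target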
Just a simple modification of silgon's answer:
while true; do
inotifywait -r -e modify,create,delete /directory
rsync -avz /directory /target
done
(silgon's version sometimes crashes on Ubuntu 16 if you run it from cron.)
Using the cross-platform fswatch and rsync:
fswatch -o /src | xargs -n1 -I{} rsync -a /src /dest
You can take advantage of fschange, a Linux filesystem change notification mechanism. The source code is available for download so you can compile it yourself. fschange can be used to keep track of file changes by reading data from a proc file (/proc/fschange). When data is written to a file, fschange reports the exact interval that has been modified instead of just saying that the file has changed.
If you are looking for a more advanced solution, I would suggest checking out Resilio Connect.
It is cross-platform and provides extended options for use and monitoring. Since it's BitTorrent-based, it is faster than any other existing sync tool. (Disclosure: this was written on their behalf.)
I use this free program to synchronize local files and directories: https://github.com/Fitus/Zaloha.sh. The repository contains a simple demo as well.
The good point: it is a bash shell script (one file only), not a black box like other programs, and the documentation is there as well. Also, with some technical talent, you can "bend" and "integrate" it to create the final solution you like.
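Invocation looks roughly like this (a sketch based on the repository's README; treat the exact flag names as assumptions and check the docs):

# One-way synchronize the source directory into the backup directory.
bash Zaloha.sh --sourceDir="/path/to/source" --backupDir="/path/to/backup"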
I created this simple script that does a backup. I wrote and tested it on Linux, then copied it into my web app's WEB-INF/scripts directory so that it could be run via Java Runtime.exec().
#!/bin/bash
JACCISE_FOLDER="/var/jaccise"
rm $JACCISE_FOLDER/jaccisebackup.zip
zip -r jaccisefolder.zip $JACCISE_FOLDER
mysqldump -ujacc -pxxx jacciseweb > jaccisewebdump.sql
zip jaccisebackup.zip jaccisewebdump.sql
zip jaccisebackup.zip jaccisefolder.zip
rm jaccisewebdump.sql
rm jaccisefolder.zip
cp jaccisebackup.zip $JACCISE_FOLDER
But it doesn't work. So I tried copying it from WEB-INF/scripts to my user directory and running it there to troubleshoot. It fails with ": File o directory non esistente" (Italian for "No such file or directory"; notice the colon at the beginning). I created another file from scratch, copied and pasted the whole script into it, and that one works. I think this may be related to:
Text encoding
\r\n line-ending differences between Windows (I use Eclipse on Windows to edit everything) and Linux.
How do I solve this deployment problem?
You should check whether the file is executable (chmod +x). Then check whether your web server allows the execution of external programs; this can be a security problem, and it is likely that the web server prevents it. Check the web server's logs. The encoding of the file can be changed with the dos2unix command. To debug your script you can add a set -x at the beginning, but I think the script does not start at all.
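A quick checklist along those lines (myscript.sh is a placeholder name):

file myscript.sh        # reports "with CRLF line terminators" if saved with Windows line endings
dos2unix myscript.sh    # convert \r\n to \n in place
chmod +x myscript.sh    # make it executable
bash -x myscript.sh     # run with tracing (same effect as set -x inside the script)

The leading colon in the error message is in fact a telltale sign of CRLF endings: the kernel looks for an interpreter literally named /bin/bash\r, fails with "No such file or directory", and the stray \r in the printed message returns the cursor to the start of the line, leaving only ": File o directory non esistente" visible.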