I am attempting to mirror a directory on a remote server using rsync. However, I would like a copy of all newly created files to be stored in a separate directory on the local machine.
For example, if a new file is added on the remote server, I would like it to be mirrored regularly (for example, to ~/mirror), but also to save an additional copy of only the new file in another folder (for example, ~/staging). To be clear, only the new files should appear in staging.
My first approach was to let rsync update the timestamps and then use those to find and copy the new files. However, I would now like to preserve the original timestamps.
Can anyone suggest a simple approach? I am open to using utilities other than rsync.
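One way to sketch this (not from the original post) is to parse rsync's itemized output, where an itemize string starting with >f+++++++ marks a file that did not exist locally before this run; the remote spec and paths below are examples:

mkdir -p ~/staging
cd ~/mirror &&
rsync -a --out-format='%i %n' user@remote:/data/ . |
while read -r flags name; do
    case $flags in
        '>f+++++++'*) cp -p --parents -- "$name" ~/staging/ ;;  # new file; GNU cp, -p keeps its timestamps
    esac
done

Because -a already preserves the original timestamps, this no longer relies on fresh mtimes to detect what is new.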
You might consider making hardlinks in the extra directory.
ln --force --target-directory=~/staging ~/mirror/*
Edit:
If this is a Linux system, incron will trigger on inotify events and would allow you to make copies of files as they are added to a directory you specify.
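An incrontab entry along these lines could do it (hypothetical paths; $@ expands to the watched directory and $# to the file name, and note that incron does not watch subdirectories recursively):

/home/user/mirror IN_CLOSE_WRITE,IN_MOVED_TO cp -p $@/$# /home/user/staging/

IN_MOVED_TO matters here because rsync normally writes to a temporary file and then renames it into place.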
I'm following the "Scripts with Support Files" answer from https://stackoverflow.com/a/46479538/4771016, which works great, but I'm running into a problem when updating my script.
If it doesn't already exist, my script creates an .env file, which users use to pass in some variables, in the same directory as the .sh file: /home/linuxbrew/.linuxbrew/Cellar/myscript/1.0.2/libexec/.env. The problem is that upon releasing a new version the .env file won't be in the new directory, i.e. /home/linuxbrew/.linuxbrew/Cellar/myscript/1.0.3/libexec/, and will therefore be recreated, losing the user's modifications.
Any ideas for keeping that .env file across updates, or an acceptable design pattern for my use case? I was thinking about keeping the .env file somewhere outside that directory, but I don't know the Homebrew directory structure well enough to pick the right place.
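Not an authoritative answer, but one common pattern is to keep user-editable state outside the versioned Cellar path, for instance under the prefix's etc/ directory, which is not replaced on upgrade. A sketch, where the etc/myscript location and the bundled .env.example template are both assumptions:

ENV_FILE="$(brew --prefix)/etc/myscript/.env"       # e.g. /home/linuxbrew/.linuxbrew/etc/myscript/.env
if [ ! -f "$ENV_FILE" ]; then
    mkdir -p "${ENV_FILE%/*}"
    cp "$(dirname "$0")/.env.example" "$ENV_FILE"   # seed from a template shipped next to the script
fi
. "$ENV_FILE"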
I am using git on the command line to change the current branch:
git checkout dev
and it produces:
fatal: cannot create directory at 'app/src/androidTest/java?com': Illegal byte sequence
As answered in this question and this one, I tried:
LC_ALL=C git checkout dev
or
LC_CTYPE=C git checkout dev
but I am getting the same error as shown above.
Running:
git status
shows that some of the files were changed by the checkout, but I am still on the master branch.
How can I remove the file causing the problems or how can I checkout the branch without getting this error?
The locale only affects how things are displayed. If the file name contains a character which isn't allowed by the file system, no amount of locale tweaking can fix that.
I can't think of a way to force a file system to let you create a file which then cannot be used, or a good reason to want to be able to do that.
As a workaround, you could create a virtualized host with a bare-bones Linux system whose file system permits old-style 8-bit file names (Latin-1, or CP1252 if you can live with the unsavory Windows flavor of that), check out the files there, rename the offending entry, and commit the rename back to git. You still won't be able to check out versions of the source tree from before the rename.
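On such a system the repair might look roughly like this (a sketch; the literal name containing the offending byte has to be taken from git ls-tree, and the ? below is only a stand-in for it):

git checkout dev                                  # succeeds where the filesystem accepts the raw bytes
git ls-tree --name-only dev app/src/androidTest/  # shows the exact problem entry
git mv 'app/src/androidTest/java?com' app/src/androidTest/java_com
git commit -m 'Rename directory with illegal byte in its name'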
I have found a few files in various GitHub projects that are not compatible with one or another operating system. Files with a ".nul" or ".con" extension are a real pain on Windows, for example. It isn't a problem exclusive to git; Subversion, for example, will abort nastily if it can't restore a file for local naming reasons.
In some cases the file may have been committed in error. If that is the case for your own projects, it should be possible to use the git tools to list the archive and delete the file from it without ever materializing the file locally.
In other cases that particular file may not be significant and can perhaps be ignored. Perhaps a test will fail if it is missing?
One trick I have used is to stop the whole folder containing that file from being checked out: manually create the directory path, but for the last element create an empty file instead of a folder. When the version control system tries to do the checkout, it will simply fail to restore that folder rather than giving a fatal error. Of course, this only works if the folder is non-critical, e.g. some test files; any tests that depend on it will fail.
The alternative is to check out everything except the problem file piecemeal, but that can be a tedious sequence of checkouts. You can, however, combine the two approaches: block the folder as above, then restore the rest of its contents by hand, for example by dragging the non-critical files out of a zip download.
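For the path in the question, the blocker-file trick would look something like this (a sketch; per the description above, the checkout then skips what it cannot write instead of dying):

mkdir -p app/src
touch app/src/androidTest   # an empty *file* where the folder would be created
git checkout dev            # everything outside androidTest is restored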
This may be asking too much from an already very powerful tool, but is there a chance that lftp mirror can execute a command during the mirroring process (from remote directory to the local machine)?
Specific example: lftp is asked to mirror a remote directory of XML files into a local folder, and as soon as each file is downloaded or updated, it converts the file to JSON format using xml2json.
I can think of a solution that relies on monitoring the local copy of the mirrored folder for changes via find and then executing xml2json on the new/updated files, but perhaps there is a simpler way?
You can use xfer:verify and xfer:verify-command settings to run a local command on every transferred file.
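For instance (an untested sketch; xml2json-ok stands for a hypothetical wrapper that converts the file it is given and exits 0, since a non-zero exit from the verify command marks the transfer as failed):

lftp -e '
set xfer:verify on;
set xfer:verify-command /usr/local/bin/xml2json-ok;
mirror /remote/xml /home/me/xml;
quit
' sftp://user@example.com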
I'm trying to add the mercurial_keyring settings, with my username and password, to the .hgrc file, but that file doesn't exist in my user directory on Windows. I have TortoiseHg installed and even checked on the command prompt that it was installed properly, yet I still don't have the .hgrc file.
Can anyone tell me what might be the reason for this?
Thanks
Because it's %USERPROFILE%\mercurial.ini
Mercurial reads configuration data from several files, if they exist. These files do not exist by default and you will have to create the appropriate configuration files yourself:
Local configuration is put into the per-repository <repo>/.hg/hgrc file.
Global configuration like the username setting is typically put into %USERPROFILE%\mercurial.ini (on Windows).
The .hgrc / mercurial.ini files are not created automatically when you install Mercurial or TortoiseHg.
You will need to create the file manually at the location you need, whether that is inside the repository's .hg folder or your own C:\Users\username\ folder.
You will probably need to use the command line to create the file as it's not usually possible to create filenames that start with . in Windows Explorer.
https://www.selenic.com/mercurial/hgrc.5.html
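Once the file exists, a minimal %USERPROFILE%\mercurial.ini wiring up mercurial_keyring could look like this (the auth prefix and username are placeholders for your own repository):

[ui]
username = Your Name <you@example.com>

[extensions]
mercurial_keyring =

[auth]
repo.prefix = https://hg.example.com/
repo.username = myuser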
I have a client who has both a public website and an intranet. The client wants to have a shared media library between the two websites.
In the past this could be done with Products.Zsyncer or collective.PloneMultiSync2, but both of these products are old and don't seem to be actively maintained.
What is the currently advisable way to solve this?
This is probably not exactly what you need, but a partial solution can be the use of Reflecto.
Files and images are then stored on the server filesystem (and so they can be rsynced even if the Plone sites are on different servers); to get them there you must rely on additional machinery such as FTP or something similar.
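With Reflecto pointing both sites at a filesystem directory, the sharing itself could then be a plain rsync between the servers (example paths only):

rsync -av /srv/plone/public/media/ plone@intranet.example.com:/srv/plone/intranet/media/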
Copying and bootstrapping a Plone site to a new computer
1) Create a new site on the destination using the Plone installer and make sure you can log in to the site with a temporary admin account.
2) Copy var/filestorage/Data.fs from the old system to the new system. Note that the admin password is stored in Data.fs, so the password given during the creation of the new site is no longer effective after the Data.fs copy.
3) Copy the blobs from the old system to the new system by copying the var/blobstorage/ folder.
4) Copy the src/ folder from the old system if you have any custom development code there.
5) Copy buildout.cfg and the other .cfg files.
6) Rerun buildout in order to automatically re-download and configure all the Python packages needed to run the site:
7) Run python bootstrap.py to make the buildout use the new local Python interpreter.
8) Then run bin/buildout to regenerate the parts/ folder.
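On the destination machine, those last two steps boil down to (example path; run inside the buildout directory):

cd /srv/plone/site    # your buildout directory
python bootstrap.py   # point the buildout at the new local Python
bin/buildout          # re-download eggs and regenerate parts/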
Copying site data in UNIX environment
Below are example UNIX commands to copy Plone site data from one computer to another over an SCP/SSH connection. The actual username and folder locations depend on your system configuration.
Note: a copy of the Plone site configuration must already exist on the target computer. These instructions only cover copying / backing up site data.
This operation can be performed on a running system: Data.fs is an append-only file, and you will simply lose the transactions that are written to the end of the file while it is being copied.
Copy local to remote
Run these commands from your Plone buildout installation directory.
Copy Data.fs database:
scp -C -o CompressionLevel=9 var/filestorage/Data.fs plone@server.com:/srv/plone/site/var/filestorage
Copy BLOB files using rsync
BLOB files contain the file and image data uploaded to your site. Since the actual content of a file rarely changes after upload, rsync can synchronize only the changed files when given the -a (archive) flag.
rsync -av --compress-level=9 var/blobstorage plone@server.com:/srv/plone/site/var
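The same transfers in the opposite direction (pulling from the server to a local copy) are simply the source and destination swapped:

scp -C -o CompressionLevel=9 plone@server.com:/srv/plone/site/var/filestorage/Data.fs var/filestorage
rsync -av --compress-level=9 plone@server.com:/srv/plone/site/var/blobstorage var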