I have a client who has both a public website and an intranet. The client wants to have a shared media library between the two websites.
In the past this could be done with Products.Zsyncer or collective.PloneMultiSync2, but both of these products are old and don't seem to be actively maintained.
What is the currently advisable way to solve this?
This is probably not exactly what you need, but a partial solution could be to use Reflecto.
With Reflecto, files and images live on the server filesystem (so they can be rsynced even if the Plone sites are on different servers), but to upload them you must rely on additional machinery such as FTP or something similar.
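For example, a minimal sketch (hostnames and directory paths are hypothetical) of mirroring a Reflecto-managed media directory from one server to the other:
# Hypothetical paths and host: push the shared media directory from the
# public site's server to the intranet server; -a preserves metadata.
rsync -av /srv/plone/public/reflecto-media/ intranet.example.com:/srv/plone/intranet/reflecto-media/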
Copying and bootstrapping a Plone site to a new computer
1) Create a new site in the destination using the Plone installer and make sure you can log in to the site with a temporary admin account
2) Copy var/filestorage/Data.fs from the old system to the new system - note that admin password is stored in Data.fs and the password given during the creation of a new site is no longer effective after Data.fs copy
3) Copy blobs from the old system to the new system by copying var/blobstorage/ folder
4) Copy src/ folder from the old system if you have any custom development code there
5) Copy buildout.cfg and other .cfg files
6) Rerun buildout in order to automatically re-download and configure all Python packages needed to run the site
7) Run python bootstrap.py to make the buildout use the new local Python interpreter
8) Then run bin/buildout to regenerate the parts/ folder
Copying site data in UNIX environment
Below are example UNIX commands to copy Plone site data from one computer to another over an SCP/SSH connection. The actual usernames and folder locations depend on your system configuration.
Note: a copy of the Plone site configuration must already exist on the target computer. These instructions are only for copying / backing up site data.
This operation can be performed on a running system: Data.fs is an append-only file, so you will simply lose any transactions committed to the end of the file while the copy was in progress.
Copy local to remote
Run this command in your buildout Plone installation.
Copy Data.fs database:
scp -C -o CompressionLevel=9 var/filestorage/Data.fs plone@server.com:/srv/plone/site/var/filestorage
Copy BLOB files using rsync
BLOB files contain the file and image data uploaded to your site. Since the actual content of a file rarely changes after upload, rsync can transfer only the files that have changed; the -a (archive) flag preserves permissions and timestamps.
rsync -av --compress-level=9 var/blobstorage plone@server.com:/srv/plone/site/var
Related
Presently I'm using phpDesigner 8. I like that it can create a "real remote" FTP project: phpDesigner doesn't download all the project files locally, it shows me the remote tree, and I download only the file that I'd like to edit. When I save, the file is automatically uploaded via FTP.
I'd like to do this with PhpStorm. I tried "New Project from Existing Files", then "Web server is on remote host, files are accessible via FTP/SFTP/FTPS". I turned on Tools/Deployment/Automatic Upload (always), and PhpStorm immediately uploads saved files via FTP. This is good. But I don't need a local copy of any of the remote files. I need the remote files downloaded only into the editor, not stored locally, and saved back to the remote server.
Is there any way to do this?
Well ... you cannot create a 100% remote FTP project -- because you still have to create an actual project ... which has to be local. It can be completely empty (no actual project files) but it will still contain config files in the .idea subfolder (e.g. your FTP details).
But yes -- once such an empty project is created -- just go to Settings/Preferences and configure Deployment.
Once it's properly set up -- go browse the Remote Host and choose to edit a file remotely -- the IDE will download that file to a temp location and upload it back as needed -- https://blog.jetbrains.com/phpstorm/2015/04/remote-edit-in-phpstorm-9-eap/.
NOTE: missing stuff when editing remotely -- https://stackoverflow.com/a/36850634/783119
My remote server is a Samba server, which is accessible by both Mac and Windows machines. I created a common folder on the Samba server. I want my local folder to be in sync with the common folder on the Samba server, because when the internet connection is lost I am unable to access the files copied to the common folder. That is why I want to sync my local folder with it.
My goals are:
When I remove files from the local folder, they should get removed from the common folder on the Samba server
Similarly, when I modify or delete files in the common folder, the changes should get reflected in the local folder on my machine.
I tried rsync:
rsync --progress -avzC --stats --force Source root@remoteserver:/path
But how do I automate syncing from both sides?
Note: For this I could rely on Dropbox, Box Sync, or some cloud-sharing app. But I want to implement the functionality myself; I don't want to rely on a third-party API.
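One naive way to automate both directions (a sketch, not from the original thread; the paths, host, and schedule are hypothetical) is to run rsync in each direction with -u, so files that are newer on the receiving side are never overwritten, and drive the script from cron. Note that deletions do not propagate this way; a purpose-built two-way tool such as unison handles that case better.
#!/bin/sh
# two-way-sync.sh -- hypothetical sketch: push, then pull; -u skips
# files that are newer on the receiving side. Deletions are NOT
# propagated by this approach.
rsync -avu /local/folder/ root@remoteserver:/path/
rsync -avu root@remoteserver:/path/ /local/folder/
A crontab entry to run it every 5 minutes could then look like:
*/5 * * * * /home/user/two-way-sync.sh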
I've installed Collabnet Subversion Edge, and would like to make sure I have it backed up properly. I would like NOT to use the CloudBackup service offered.
I went to the administration interface for CollabNet (localhost:3343) and then to Repositories > Backup Schedule. There, one can choose between 3 different 'Type of Job' options:
Cloud Services Backup
Full Dump Backup
Hotcopy Backup
None of them lets you choose where to copy the backup. I've tried looking up how this works, but the documentation seems to be lacking.
What is the best way to back up such a repository? Shall I just keep a copy of the entire CollabNet folder (c:\csvn)?
The Subversion Edge admin UI lets you specify the folder for backups. It defaults to a folder inside the normal data folder, but you can specify a different value. So, for example, if you have a D:\ drive that you want the backups to go on you can just specify that folder in the settings and the backups will go to that folder.
It does need to be a physically accessible hard drive though.
See the Backup Directory configuration item in this screenshot:
https://ctf.open.collab.net/sf/projects/svnedge/screenshots/screens/config/config.png
You can use Windows Server Backup to back up Subversion repositories. It allows you to schedule backups to a network share, a dedicated backup volume, or writable media. For example, the wbadmin command-line tool allows you to safely back up your repositories. This simple command performs a one-time copy backup of C:\foo\bar to the X: volume:
wbadmin start backup -backupTarget:x: -include:c:\foo\bar -vsscopy
(To install Windows Server Backup, run ocsetup WindowsServerBackup in an elevated command prompt.)
You can set up backups in different ways:
wbadmin command-line tool (see the sketch after this list),
PowerShell cmdlets, good for automation and customization of backup actions,
Windows Server Backup wizard (actually an MMC snap-in / control panel).
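For a scheduled backup, a hedged sketch with wbadmin (the 03:00 time is hypothetical, the target and include path follow the example above, and the exact -addtarget syntax and support for file/folder includes vary by Windows Server version):
wbadmin enable backup -addtarget:x: -include:c:\foo\bar -schedule:03:00 -vsscopy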
It's not required to stop the server's service when you run the backup, because the FSFS repository backend is always in a consistent state.
Here are general tips about recovering a Subversion repository from a backup:
Recover the repository backup to an empty directory to make sure that the restored repository files won't mix with files of the broken one. After the repository is recovered, you can delete the broken repository and then replace it with the recovered one.
Stop-start cycle your Subversion server after recovering a repository from a backup.
If your clients get errors after the repository is recovered, run svnadmin recover against it (as sketched below). The command finishes instantly and makes the repository accessible again.
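A minimal sketch, with a hypothetical repository path:
# Hypothetical path: run against the restored repository if clients
# still report errors after recovery.
svnadmin recover /srv/svn/repos/myrepo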
If you have access to the repository directories, then you should be able to use svnadmin hotcopy directly and specify where the backups go.
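For example (both paths hypothetical):
# Safely copies a live repository; the destination path should not
# already exist.
svnadmin hotcopy /opt/csvn/data/repositories/myrepo /backups/myrepo-hotcopy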
It's enough to take a periodic backup of just the csvn/data directory, where all your repositories and configuration files are stored.
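For instance, a minimal sketch (install and backup locations hypothetical):
# Hypothetical paths: archive the data directory under a date-stamped name.
tar czf /backups/csvn-data-$(date +%F).tar.gz -C /opt/csvn data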
Visit this link for backup (and upgrade) options. The contents of the link are added below. Hope it helps.
Manual Upgrade/Reinstallation Steps
Subversion Edge includes an integrated mechanism for installing updates. This is the preferred way to do an upgrade, as it handles whatever steps are needed to perform the upgrade and can be done remotely from your web browser. However, there are scenarios where you might want or need to do an upgrade manually: for example, your Subversion Edge server might not be able to access the Internet to pull down the updates, or one or more critical installation files might have become corrupted and you need to reinstall the same version. Here are the steps for performing a manual upgrade or reinstallation:
Windows
If your existing Subversion Edge installation was installed using the installer from Subversion Edge 2.0.0 or later, then all you need to do to upgrade is download the latest installer and run it. This will uninstall the current version and install the new version (which is how the Windows Installer (.msi) process works for upgrades).
If you are not sure what version you installed with, you can always safely use this approach:
Stop the existing services and uninstall the current version from the Windows Control Panel. This will leave behind your C:\csvn folder and any files in it that have been modified since the original install.
Delete everything in the C:\csvn folder EXCEPT the data folder, so you are left with just the C:\csvn\data folder.
Install the new version. The installer will pick up the existing data folder, and when the services start it will basically just be an upgrade to the new version.
WARNING: Take note of this reported bug and back up the svn_access_file first:
artf7081 - Using Windows installer for updates can overwrite the svn_access_file
Linux/Solaris
To upgrade a Linux/Solaris installation, this is the safest way to do it:
Stop the servers
$ bin/csvn stop
$ bin/csvn-httpd stop
Rename the csvn folder
$ mv csvn csvn-old
Untar the new release as a non-root user
Move the data folder back into the new release
$ mv csvn-old/data csvn
Important! Copy "dist" configuration files to data folder
$ cp -f csvn/dist/*.dist csvn/data/conf
Start the servers
$ bin/csvn start
$ bin/csvn-httpd start
We have a web app running on a Windows server, which allows a user to do some processing and download the results. The result is a set of files which are dynamically created on the server and zipped into a single file for facilitating the download process.
Everything works fine on Windows, but when users download the file from the web app on a Mac, the contents of the zip file have the execute (chmod +x) permission set (I presume that the same happens on *NIX and Linux machines). This can, of course, be removed by running the 'chmod -x' command, but is there a way by which one can remove the execute permission on the files, so that when downloaded on a Mac, the files don't have the execute permission set by default?
I believe it's not possible: .zip files created on Windows don't record Unix permissions, so on a Mac the extractor has to default to "most permissive" (otherwise it's possible that there are applications inside the zip that wouldn't be marked as executable when they need to be).
Tar archives, for instance, do record permissions, but they'd be a bit more difficult to create on a Windows server.
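As a client-side workaround, a minimal sketch (the extraction path is hypothetical) that strips the execute bit from regular files only, leaving directories traversable:
# Hypothetical path: clear the execute bit on every regular file
# extracted from the zip, but not on the directories.
find ~/Downloads/extracted -type f -exec chmod a-x {} +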
I am attempting to mirror a directory on a remote server using rsync. However, I would like a copy of all newly created files to be stored in a separate directory on the local machine.
For example, if a new file is added on the remote server, I would like it to mirror regularly (for example, to ~/mirror), but save an additional copy of only the new file in another folder, (for example, ~/staging). To be clear, only the new files should appear in staging.
My first approach was to allow rsync to update the timestamps, and then use that to make a copy. However, I would now like to preserve timestamps.
Can anyone provide ideas on a simple approach? I am open to use of additional utilities other than rsync.
You might consider making hard links in the extra directory; a hard link shares the same inode as the original, so the timestamps are preserved.
ln --force --target-directory=~/staging ~/mirror/*
Edit:
If this is a Linux system, incron will trigger on inotify events and would allow you to make copies of files as they are added to a directory you specify.
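A hedged sketch of an incrontab entry (the watched and destination paths are hypothetical): once a file in ~/mirror has been fully written (IN_CLOSE_WRITE), copy it into ~/staging with -p to preserve its mode and timestamps; in incron, $@ expands to the watched directory and $# to the name of the file that triggered the event:
/home/user/mirror IN_CLOSE_WRITE cp -p $@/$# /home/user/staging/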