Does anyone know of a script to publish a web site directory hierarchy to a web server from a local git repo? I'm trying to avoid re-inventing the wheel.
I was thinking the solution could either be scripted, visiting file by file, or could sync a whole directory tree using rsync. If scripted file by file, a configuration file would contain tuples: each file to be published and its permissions. The configuration file could also contain path names for folders to be created.
Appreciate your guidance.
The solution was to use rsync, then ssh with the chmod command to fix permissions after syncing the directories.
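For anyone who wants the same setup, here is a minimal sketch of that approach; the host, paths, and permission modes below are placeholders, not the original configuration:
# sync the local tree to the server; -a preserves the hierarchy, --delete mirrors removals
rsync -avz --delete ./public/ user@example.com:/var/www/site/
# then normalize permissions over ssh: 755 for directories, 644 for files
ssh user@example.com 'find /var/www/site -type d -exec chmod 755 {} + ; find /var/www/site -type f -exec chmod 644 {} +'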
Installed/running Ontotext GraphDB v10.1.0 (free desktop windows). All working fine, create repositories, run SPARQL, etc.
The server and UI are both loading/running/reporting repositories in the C:\Users\<Username>\AppData\Roaming\Graph\data\repositories folder.
However, when running the ImportRdf.cmd utility, it's "attaching to"/creating the repository in the C:\Users\<Username>\AppData\Local\Graph\data\repositories folder instead!?
Tried adding the correct path into C:\Users\<user>\AppData\Local\GraphDB Desktop\app\GraphDB Desktop.cfg, but it makes no difference.
Anyone experienced this/got any fixes?
The data (repository) directory can be set through the system or config property graphdb.home.data. The default value is the data subdirectory relative to the GraphDB home directory. For example, one way to configure it: go into the bin folder of the GraphDB distribution and start GraphDB with the following command:
./graphdb -Dgraphdb.home="full path to where you want your repo directory"
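If a config file is preferable to a startup flag, the same property should also work from conf/graphdb.properties inside the distribution folder; a sketch, with an assumed path you would adjust to your machine:
# conf/graphdb.properties - point the data (repository) directory elsewhere;
# forward slashes are fine for Windows paths in Java properties files
graphdb.home.data = C:/Users/<Username>/AppData/Roaming/Graph/data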
I am trying to isolate a problem in backend logic using a log file. I made a custom log file for the purpose because the default log file has too much content to filter through. The module is already live, so I have to read the log file from the server to debug the problem. While committing, I noticed that the log files I created were covered by .gitignore. So I wanted to know how this works. Are log files generally placed in .gitignore? And do servers make their own log files?
Yes, the server will create its own log files. Version control should not track them, since the information they contain is specific to the environment that produced them (your server, in this case). That is why, by default, the storage/logs directory contains a .gitignore file with the content:
*
!.gitignore
which tells Git to ignore every file in that directory except the .gitignore itself.
If your new log file is in this directory, it will not be tracked by Git either.
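If you keep a custom log somewhere else instead, an equivalent rule in the project's top-level .gitignore does the same job; the path here is just an illustration, not from the question:
# keep a custom debug log out of version control
app/logs/debug-custom.log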
I am trying to update a module to a newer version. In the past I have manually uploaded each file carefully into the new directory and overwritten older files using FTP. However I wanted to use SSH to try and do this more easily and without any file permission problems.
I have:
Uploaded the .tgz file to the root folder (/http) on the server
Logged into the server via SSH
Changed the directory to the correct directory
Run the following command: tar -zxvf fishpig_splash.tgz
In the command line I was then given a list of all the files that had been extracted. However, if I use FTP to go to any of these files, I can see that they are still the older versions and have not been overwritten.
I was expecting that the files would extract into the correct directories and overwrite any that already existed. I have tested the extraction by creating a temporary directory and extracting into that and everything worked fine.
Is there another part to this script I need to use to overwrite the files?
Thanks
Glynn
Sorry, this was just me being stupid! When extracting the tar file there was a subfolder within it for the extension, and I completely missed it. I just went down a level into that folder, zipped up only its contents, then extracted that at the root and everything worked fine. Thanks for the help though!
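For future readers: GNU tar can also drop that top-level folder at extraction time, which avoids repacking the archive. A sketch, assuming the archive has a single leading directory and using a placeholder target path:
# --strip-components=1 removes the archive's top-level folder;
# -C extracts into the given directory instead of the current one
tar -zxvf fishpig_splash.tgz --strip-components=1 -C /path/to/http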
I am having directory issues when using Sublime SFTP to upload files. I am currently using the ftp type, and the issue is this:
My local and remote folders match, but when I take a file from the local side that is, say, 3 directories deep and modify/upload it, the file is put in the root folder and not in the matching directory.
Any ideas are appreciated.
Not sure if this helps, but
Uploading "/Volumes/--/Dropbox/ItsJustFood/web/wp-content/themes/justfood/library/css/style.css" to "/library/css/style.css" ..... success
looks like it's only going two directories up when it should be going to the root?
Turns out you have to make sure the sftp-config.json file is also in the root folder of your remote copy.
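For anyone hitting the same thing, that file is the sftp-config.json Sublime SFTP generates per project; a minimal sketch with placeholder values (only remote_path has to match your server layout, and Sublime's parser accepts the // comments):
{
    // placeholders, not the asker's real settings
    "type": "ftp",
    "host": "example.com",
    "user": "username",
    "remote_path": "/",
    "upload_on_save": true
}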
I am attempting to mirror a directory on a remote server using rsync. However, I would like a copy of all newly created files to be stored in a separate directory on the local machine.
For example, if a new file is added on the remote server, I would like it to mirror regularly (for example, to ~/mirror), but save an additional copy of only the new file in another folder, (for example, ~/staging). To be clear, only the new files should appear in staging.
My first approach was to allow rsync to update the timestamps, and then use that to make a copy. However, I would now like to preserve timestamps.
Can anyone provide ideas on a simple approach? I am open to use of additional utilities other than rsync.
You might consider making hardlinks in the extra directory.
ln --force --target-directory="$HOME/staging" ~/mirror/*
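Since a hard link shares its inode with the original, the copy in ~/staging keeps the mirrored file's timestamps and takes no extra disk space; the one constraint is that ~/staging and ~/mirror must be on the same filesystem.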
Edit:
If this is a Linux system, incron will trigger on inotify events and would allow you to make copies of files as they are added to a directory you specify.
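A sketch of such an incrontab entry, assuming incron is installed and using placeholder paths; in incron's syntax, $@ expands to the watched directory and $# to the file name:
# added via incrontab -e: on each file created in or moved into the mirror
# (rsync renames temp files into place, hence IN_MOVED_TO), copy it to
# staging with cp -p to preserve timestamps
/home/user/mirror IN_CREATE,IN_MOVED_TO cp -p $@/$# /home/user/staging/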