Many developers maintain a dotfile repository, keeping all their configurations in a single space that can be easily synced among different machines. What I haven't seen so far is people maintaining their crontabs in the dotfile repository.
Regardless of whether you find the idea itself useful, what would be a convenient way to manage one's crontab in the dotfile repository as well? In my dotfiles repository, the management of the symlinks is handled by GNU Stow and a simple Makefile that wraps the stow commands like so:
all:
	stow --verbose --target=$$HOME --restow */

delete:
	stow --verbose --target=$$HOME --delete */
GNU Stow obviously will not work for crontabs. How could I integrate the rollout, update and removal of a crontab in this setup?
crontab has a usage of the form:
crontab [-u user] file
which will install a new crontab from some named file. You can place this step in your Makefile and install a file committed to the repository.
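For example, a minimal sketch that extends your existing Makefile, assuming the crontab is committed to the repository as cron/crontab (the file path and target names here are placeholders):

cron:
	crontab cron/crontab

cron-delete:
	crontab -r

Running make cron again after editing the file also covers updates, since crontab file replaces the whole table in one step; crontab -r removes the active crontab entirely.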
The local configuration of the project I'm working on involves changing several files in complicated ways that cannot be committed to any submitted branches. To work around this, I've committed these local configuration changes to a dedicated local branch named config, and have been running this bash script, config.sh, after starting a new work branch:
#!/bin/bash
# put relevant config files in array
mapfile -t files < <(git diff config develop --name-only)
# overwrite only those files to my working directory
git checkout config -- "${files[@]}"
# unstage them so they aren't accidentally committed
git reset HEAD "${files[@]}"
echo "The following files were successfully overwritten for local configuration:"
printf '\t%s\n' "${files[@]}"
Along with another script, deconfig.sh, that does the same in reverse. Run directly from the terminal, these scripts have been working fine, but I'd like to streamline the process further using git's clean and smudge filters. So I created a .gitattributes file:
*.* filter=config
and then added this to my .git/config file:
[filter "config"]
smudge = ./config.sh
clean = ./deconfig.sh
However, it just isn't working. If I had to guess, it's because git isn't expecting me to run an additional checkout as part of a filter, which itself runs after the checkout command against all files. Most use cases for smudge and clean filters seem to involve simple find-and-replace operations, but that approach would be complicated to implement and difficult to maintain given the complexity of the changes needed.

I could store the configuration files in a static, external directory somewhere, but I'd like to smudge and clean based off the same configuration branch, because the local configuration itself frequently evolves and benefits from versioning alongside the rest of the project; ideally the branch could also serve as a baseline for other devs' local configuration. Git's filter-branch might be a better fit, but git's own documentation recommends against using it at all.

Is there a way to do this? Is there something wrong with my git configuration? Could the script itself be causing a problem? Are there other possible approaches?
Although it is not documented anywhere, you cannot change the state of the working tree with a smudge or clean filter. Git expects to invoke the filter once for each file by piping data into it and reading the filtered data from its standard output. In other words, these filters are intended to be invoked per file and to transform only that file's content, not to modify the state of the working tree.
The best solution to your problem is to avoid keeping a separate branch. Simply keep all of the files, both development and production, in some directory, and use a script to copy the correct one into place. The location of the running config file should be ignored, so the script won't cause Git to show anything as modified. Alternatively, keep a template somewhere, and have the script generate the appropriate one based on the environment. This is good if you have secrets for production that should not be checked in; you can pass them to the script through the environment and have the right values generated.
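A minimal sketch of that approach, assuming the variants live in a config/ directory and the running location (config.local here) is listed in .gitignore; all file names are placeholders:

#!/bin/sh
# copy the variant for the requested environment into the ignored
# running location; Git never sees config.local change
case "$1" in
  dev)  cp config/config.dev  config.local ;;
  prod) cp config/config.prod config.local ;;
  *)    echo "usage: $0 dev|prod" >&2; exit 1 ;;
esac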
What you're doing is related to ignoring tracked files, which, as outlined in the Git FAQ, generally can't be done successfully.
I have a number of scripts that I use almost everyday in my work. I develop and maintain these on my personal laptop. I have a local git repository where I track the changes, and I have a repository on github to which I push my changes.
I do a lot of my work on a remote supercomputer, and I use my scripts there a lot. I would like to keep my remote /home/bin updated with my maintained scripts, but without cluttering the system with my repository.
My current solution does not feel ideal. I have added the code below to my .bashrc. Whenever I log in, the repository is deleted and then cloned anew from GitHub. Then I copy the script files I want to my bin and make them executable.
This sort of works, but it does not feel like an elegant solution. I would like to simply download the script files directly, without bothering with the git repository. I never edit my script files from the remote computer anyway, so I just want to get the files from github.
I was thinking that perhaps wget could work, but it did not feel very robust to hard-code the URLs of the raw files on GitHub; if I rename a file, I suppose I have to update the code as well. At least my current solution is robust (as long as the GitHub link does not change).
Code in my .bashrc:
REPDIR="$HOME/myproject"
# delete any previous clone
if [ -d "$REPDIR" ]; then
    rm -rf "$REPDIR"
    echo "Old repository removed."
fi
# clone a fresh copy and install the scripts into bin
cd "$HOME"
git clone https://github.com/user/myproject
cp "$REPDIR"/*.py "$REPDIR"/*.sh /home/user/bin/
chmod +x /home/user/bin/*
Based on Kent's solution, I have defined a function that updates my scripts. To avoid any issues with symlinks, I just unlink everything and relink. That might just be my paranoia, though.
function updatescripts() {
    DIR=/home/user/scripts
    CURR_DIR=$PWD
    cd "$DIR"
    git pull origin master
    cd "$CURR_DIR"
    for file in "$DIR"/*.py "$DIR"/*.sh; do
        # remove any existing link, then relink to the fresh file
        if [ -L "$HOME/bin/$(basename "$file")" ]; then
            unlink "$HOME/bin/$(basename "$file")"
        fi
        ln -s "$file" "$HOME/bin/$(basename "$file")"
    done
}
On that remote machine, don't do rm then clone; keep the repository somewhere and just do pull. Since you said you will not change the files on that machine, there won't be conflicts.
For the script files, don't cp; instead, create symbolic links (ln -s) in your target directory.
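A minimal sketch of both points together, assuming the clone already exists at $HOME/myproject (all paths are placeholders):

# update the existing clone in place instead of deleting and re-cloning
git -C "$HOME/myproject" pull
# link the scripts into ~/bin; -f replaces any stale links
ln -sf "$HOME/myproject"/*.py "$HOME/myproject"/*.sh "$HOME/bin/"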
I am writing a POSIX-compliant shell script that will, amongst other things, clone a git repository and then execute a script (that was cloned along with the repository) inside the repository.
For example:
git clone git@github.com:torvalds/linux.git
cd linux
./Kconfig
The idea would be that people would use it for good, not evil, but you know.... So really I would like to stop people from putting a line like:
rm -rf /
inside the script.
Or perhaps something slightly less evil like:
rm -rf ../../
Is it possible for me to somehow change the permissions of the script (after the clone) so that it is only able to modify things inside the cloned repository?
Basically, the answer to your question is the chroot command, which allows you to lock processes into a directory as if it were the root directory. chroot requires root privileges to set up, but there are alternative implementations such as schroot, fakechroot, or proot that don't. Because all file system access (reads included) is restricted, you will need to hand anything that the scripts need to function into the chrooted environment. How to do that conveniently depends on your distribution.
That doesn't necessarily mean it is perfectly secure, because it provides only file system isolation.
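For example, a hedged sketch using proot, which needs no root privileges (the repository URL and script name are placeholders); the -b options hand the shell and system libraries into the new root, and the exact set needed varies by distribution:

git clone https://example.com/repo.git repo
# run the script with the clone as its root directory; paths outside
# it are invisible except for the explicitly bound system directories
proot -r "$PWD/repo" -b /bin -b /lib -b /lib64 -b /usr sh /script.sh

Note that the bound directories are still the real host paths, so this relies on ordinary file permissions to protect them.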
I would like to have a synchronized copy of one folder with all its subtree.
It should work automatically in this way: whenever I create, modify, or delete stuff from the original folder those changes should be automatically applied to the sync-folder.
What is the best approach to this task?
BTW: I'm on Ubuntu 12.04
The final goal is to have a separate real-time backup copy, without the use of symlinks or mounts.
I used Ubuntu One to synchronize data between my computers, and after a while something went wrong and all my data was lost during a synchronization.
So I thought to add a step further to keep a backup copy of my data:
I keep my data stored in "folder A".
I need the answer to my current question to create a one-way sync from "folder A" to "folder B" (perhaps a cron job running an rsync script?). It must be one-way only, from A to B; any changes to B must not be applied to A.
Then I simply keep "folder B" synchronized with Ubuntu One.
In this manner any change in A will be applied to B, detected by U1, and synchronized to the cloud. If anything goes wrong and U1 deletes my data on B, I still have it on A.
Inspired by lanzz's comments, another idea could be to run rsync at startup to backup the content of a folder under Ubuntu One, and start Ubuntu One only after rsync is completed.
What do you think about that?
How do I know when rsync ends?
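One note on that last point: rsync runs in the foreground and exits when the transfer is complete, so a wrapper script could simply chain the two steps. A sketch, where start-ubuntu-one stands in for whatever command launches the Ubuntu One client:

# mirror A to B one-way, then start the cloud client only on success
rsync -a --delete /folderA/ /folderB/ && start-ubuntu-one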
You can use inotifywait (with the modify, create, delete, and move events enabled) together with rsync.
while inotifywait -r -e modify,create,delete,move /directory; do
rsync -avz /directory /target
done
If you don't have inotifywait on your system, run sudo apt-get install inotify-tools
You need something like this:
https://github.com/axkibe/lsyncd
It is a tool that combines rsync and inotify: the former, with the correct options set, mirrors a directory down to the last bit; the latter tells the kernel to notify a program of changes to a directory or file.
It says:
It aggregates and combines events for a few seconds and then spawns one (or more) process(es) to synchronize the changes.
But according to Digital Ocean (https://www.digitalocean.com/community/tutorials/how-to-mirror-local-and-remote-directories-on-a-vps-with-lsyncd), it ought to be in the Ubuntu repositories!
I have similar requirements, and this tool, which I have yet to try, seems suitable for the task.
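For a quick start, lsyncd can also be driven directly from the command line without a config file; a sketch, assuming the Ubuntu package and placeholder paths:

sudo apt-get install lsyncd
lsyncd -rsync /directory /target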
Just a simple modification of @silgon's answer:
while true; do
inotifywait -r -e modify,create,delete /directory
rsync -avz /directory /target
done
(@silgon's version sometimes crashes on Ubuntu 16 if you run it from cron)
Using the cross-platform fswatch and rsync:
fswatch -o /src | xargs -n1 -I{} rsync -a /src /dest
You can take advantage of fschange, a Linux filesystem change notification mechanism. The source code is available for download, and you can compile it yourself. fschange can be used to keep track of file changes by reading data from a proc file (/proc/fschange). When data is written to a file, fschange reports the exact interval that has been modified instead of just saying that the file has changed.
If you are looking for a more advanced solution, I would suggest checking out Resilio Connect.
It is cross-platform and provides extended options for use and monitoring. Since it's BitTorrent-based, it can be faster than other existing sync tools. (Disclosure: this was written on their behalf.)
I use this free program to synchronize local files and directories: https://github.com/Fitus/Zaloha.sh. The repository contains a simple demo as well.
The good point: it is a bash shell script (one file only), not a black box like other programs. The documentation is there as well. Also, with some technical skill, you can "bend" and "integrate" it to create the final solution you like.
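A sketch of a basic invocation, based on the options shown in its README (the directory paths are placeholders):

bash Zaloha.sh --sourceDir="/folder_A" --backupDir="/folder_B"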
I want to write a bash script that will store 10 backups of a website in SVN, backing the site up nightly and deleting the oldest backup.
Is there an SVN command with which I can get the age of these files in SVN, so that I can programmatically call "svn delete" on the oldest one?
Subversion is definitely not the tool for this job. Once you commit something to subversion, there is no practical way to delete it.
There are a lot of ways to achieve your goal using standard commands in bash. You can use tools like ftp, wget, curl, scp, ssh, or whatever to download your site files, then tar and zip them up with different file names based on the date.
#!/bin/bash
# tonight's archive name, and the 10-day-old archive to delete
DELETEME="htdocs_$(date '+%Y%m%d' -d '-10 days').tar.gz"
NEW="htdocs_$(date '+%Y%m%d').tar.gz"
SOURCE='/path/on/server/to/backup'
HOST='IP_or_hostname'
USER='user_on_HOST'
# stream a tarball of the site over ssh into the new backup file
ssh "$USER@$HOST" tar czvf - "$SOURCE" > "$NEW"
rm -v "$DELETEME"
Then just schedule this as a daily cron job.
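The crontab entry could look like this (the script path is a placeholder):

# min hour day month weekday  command: run the backup nightly at 02:30
30 2 * * * /path/to/backup.sh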
It doesn't sound like you understand how Subversion works.
Subversion is a version control system, and you really use it the other way around: you keep your web pages and JavaScript in Subversion, and then deploy your website from Subversion. You have a complete history of all of your files in Subversion and can use features like tagging to mark specific revisions of your website. This way, you can find out who made changes and why they were made.
It sounds like you simply want to make a backup of your website, and then delete the oldest backup to save room.
You should look into rsync, which is really great for backups. Rsync is fast and pretty simple to use.
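For instance, a minimal sketch of a nightly rsync backup with a ten-copy rotation, where all paths are placeholders:

#!/bin/bash
# copy the site into a dated directory, then prune old copies
DEST="/backups/site-$(date +%Y%m%d)"
rsync -a /var/www/site/ "$DEST/"
# dated names sort chronologically; keep only the 10 newest directories
ls -d /backups/site-* | head -n -10 | xargs -r rm -rf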
You can look at the Subversion online manual and read the first two or three chapters. It'll explain how Subversion is used, and it's one of the best manuals for open source software out there. After you read it, you might decide to use Subversion after all, not for backups but for development.