Incremental deploy from a shell script - bash

I have a project where I'm forced to use ftp as the means of deploying files to the live server.
I'm developing on Linux, so I hacked together a bash script that makes a backup of the ftp server's contents,
deletes all the files on the ftp, and uploads all the fresh files from the Mercurial repository
(it also takes care of user-uploaded files and folders, post-deploy changes, etc.).
It's working well, but the project is starting to get big enough to make the deployment process too long.
I'd like to modify the script to look up which files have changed, and only deploy the modified files. (the backup is fine atm as it is)
I'm using mercurial as a VCS, so my idea is to somehow request the changed files between two revisions from it, iterate over the changed files,
and upload each modified file, and delete each removed file.
I can use hg log -vr rev1:rev2, and from the output, I can carve out the changed files with grep/sed/etc.
Two problems:
I have heard the horror stories that parsing the output of ls leads to insanity, so my guess is that the same applies here:
if I try to parse the output of hg log, the variables will undergo word-splitting and all kinds of transformations.
hg log doesn't tell me whether a file was modified/added/deleted. At the very least I need to differentiate between modified and deleted files.
So, what would be the correct way to do this? I'm using yafc as an ftp client, in case it's needed, but willing to switch.

You could use a custom style that does the parsing for you.
hg log --rev rev1:rev2 --style mystyle
Then pipe it to sort -u to get a unique list of files. The file "mystyle" would look like this:
changeset = '{file_mods}{file_adds}\n'
file_mod = '{file_mod}\n'
file_add = '{file_add}\n'
The file_mods and file_adds templates list the files that were modified or added. There are similar file_dels and file_del templates for deleted files.
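For example, assuming the style file above is saved as "mystyle" in the current directory, a unique list of touched files could be collected like this:
hg log --rev rev1:rev2 --style ./mystyle | sort -u > changed_files.txt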
Alternatively, you could use hg status -ma --rev rev1-1:rev2, which prefixes each modified/added file with an M or an A. You need to pass a different revision range, one less than rev1, since the status is reported relative to that "baseline". Deleted files work the same way: add the -d flag and each deleted file is prefixed with a D.
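To avoid the word-splitting problems you're worried about, one option is to have hg status separate filenames with NUL bytes and read them in a loop. A rough, untested sketch (REV1/REV2 are placeholders for your range, and the upload/delete commands are left as stubs for whatever ftp client you end up using):
# -m/-a select modified/added files, -d deleted ones, -n drops the status letter, -0 uses NUL separators
while IFS= read -r -d '' f; do
    echo "would upload: $f"    # hand "$f" to yafc/lftp/etc. here
done < <(hg status -man0 --rev "REV1:REV2")
while IFS= read -r -d '' f; do
    echo "would delete remotely: $f"
done < <(hg status -dn0 --rev "REV1:REV2")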

Related

Git smudge and clean using local configuration branch

The local configuration of the project I'm working on involves changing several files in complicated ways that cannot be committed to any submitted branches. To work around this I've committed these local configuration changes to a dedicated local branch config, and have been running this bash script config.sh after starting a new work branch:
#!/bin/bash
# put relevant config files in an array
mapfile -t files < <(git diff config develop --name-only)
# overwrite only those files in my working directory
git checkout config -- "${files[@]}"
# unstage them so they aren't accidentally committed
git reset HEAD "${files[@]}"
echo "The following files were successfully overwritten for local configuration:"
printf '\t%s\n' "${files[@]}"
Along with another .deconfig script that does the same in reverse. Run directly from the terminal, these scripts have been working fine, but I'd like to streamline the process further using git's clean and smudge filters. So I created a .gitattributes file:
*.* filter=config
and then added this to my .git/config file:
[filter "config"]
smudge = ./config.sh
clean = ./deconfig.sh
However, it just isn't working. If I had to guess, it's because git isn't expecting me to run an additional checkout as part of a filter, which itself runs after the checkout command against all files. Most use cases for smudge and clean seem to involve simple find-and-replace operations, but that approach would be complicated to implement and difficult to maintain given the complexity of the changes needed. I could store the configuration files in a static, external directory somewhere, but I'd like to smudge and clean based on the same configuration branch, because the local configuration itself frequently evolves and benefits from versioning alongside the rest of the project, and ideally the branch could be used as a baseline for other devs' local configuration. Git's filter-branch might be a better fit, but git's own documentation recommends against using it at all. Is there a way to do this? Is there something wrong with my git configuration? Could the script itself be causing a problem? Any other possible approaches?
Although it is not documented anywhere, you cannot change the state of the working tree with a smudge or clean filter. Git expects to invoke the filter once for each file by piping data into it and reading the data from its standard output. In other words, these filters are intended to be invoked on a per-file basis and to process only that file's content, not to modify the working tree state.
The best solution to your problem is to avoid keeping a separate branch. Simply keep all of the files, both development and production, in some directory, and use a script to copy the correct one into place. The location of the running config file should be ignored, so the script won't cause Git to show anything as modified. Alternatively, keep a template somewhere, and have the script generate the appropriate one based on the environment. This is good if you have secrets for production that should not be checked in; you can pass them to the script through the environment and have the right values generated.
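A minimal sketch of that copy-into-place approach, assuming the tracked templates live in a config/ directory and the untracked runtime file is app.conf (both names are made up for illustration, and app.conf would be listed in .gitignore):
#!/bin/sh
# usage: ./apply-config.sh [environment], e.g. ./apply-config.sh production
env="${1:-development}"
cp "config/app.conf.$env" app.conf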
What you're doing is related to ignoring tracked files, which, as outlined in the Git FAQ, generally can't be done successfully.

Is it possible to sync multiple clients over a central server using just rsync and POSIX shell scripting?

The scenario
I have a file server that acts as a master storage for the files to sync, and I have several clients that have a local copy of the master storage. Each client may alter files from the master storage, add new ones or delete existing ones. I would like all of them to stay in sync as well as possible by regularly performing a sync operation, yet the only tool I have available everywhere for that is rsync, and I can only run script code on the clients, not on the server.
The problem
rsync doesn't perform a bi-directional sync, so I have to sync from server to client as well as from client to server. This works okay for files that just changed by running two rsync operations but it fails when files have been added or deleted. If I don't use rsync with a delete option, clients cannot ever delete files as the sync from the server to the client restores them. If I use a delete option, then either the sync from server to client runs first and deletes all new files the client has added or the sync from client to server runs first and deletes all new files other clients have added to the server.
The question
Apparently rsync alone cannot handle that situation, since it is only supposed to bring one location in sync with another location. I surely need to write some code, but I can only rely on POSIX shell scripting, which seems to make achieving my goals impossible. So can it even be done with rsync?
What is required for this scenario are three sync operations and awareness of which files the local client has added/deleted since the last sync. This awareness is essential and establishes a state, which rsync doesn't have, as rsync is stateless; when it runs it knows nothing about previous or future operations. And yes, it can be done with some simple POSIX scripting.
We will assume three variables are set:
metaDir is a directory where the client can persistently store files related to the sync operations; the content itself is not synced.
localDir is the local copy of the files to be synced.
remoteStorage is any valid rsync source/target (can be a mounted directory or an rsync protocol endpoint, with or w/o SSH tunneling).
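For illustration, they might be set like this (all three values are made-up examples):
metaDir="$HOME/.sync-meta"
localDir="$HOME/shared"
remoteStorage="user@fileserver:/srv/shared"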
After every successful sync, we create a file in the meta dir that lists all files in the local dir; we need this to track files getting added or deleted between two syncs. If no such file exists, we have never run a successful sync. In that case we just sync all files from remote storage, build such a file, and we are done:
filesAfterLastSync="$metaDir/files_after_last_sync.txt"
if [ ! -f "$metaDir/files_after_last_sync.txt" ]; then
rsync -a "$remoteStorage/" "$localDir"
( cd "$localDir" && find . ) | sed "s/^\.//" | sort > "$filesAfterLastSync"
exit 0
fi
Why ( cd "$localDir" && find . ) | sed "s/^\.//"? Files need to be rooted at $localDir for rsync later on. If a file $localDir/test.txt exists, the generated output file line must be /test.txt and nothing else. Without the cd and an absolute path for the find command, it would contain /..abspath../test.txt and without the sed it would contain ./test.txt. Why the explicit sort call? See further downwards.
If that isn't our initial sync, we should create a temporary directory that auto-deletes itself when the script terminates, no matter which way:
tmpDir=$( mktemp -d )
trap 'rm -rf "$tmpDir"' EXIT
Then we create a file list of all files currently in local dir:
filesForThisSync="$tmpDir/files_for_this_sync.txt"
( cd "$localDir" && find . ) | sed "s/^\.//" | sort > "$filesForThisSync"
Now why is there that sort call? The reason is that I need the file list to be sorted below. Okay, but then why not tell find to sort the list? That's because find does not guarantee to sort the same way as sort does (that is explicitly documented on the man page), and I need exactly the order that sort produces.
Now we need to create two special file lists, one containing all files that were added since the last sync and one that contains all files that were deleted since the last sync. Doing so is a bit tricky with just POSIX, but various possibilities exist. Here's one of them:
newFiles="$tmpDir/files_added_since_last_sync.txt"
join -t "" -v 2 "$filesAfterLastSync" "$filesForThisSync" > "$newFiles"
deletedFiles="$tmpDir/files_removed_since_last_sync.txt"
join -t "" -v 1 "$filesAfterLastSync" "$filesForThisSync" > "$deletedFiles"
By setting the delimiter to an empty string, join compares whole lines. Usually the output would contain all lines that exist in both files, but we instruct join to only output the lines of one file that cannot be matched with the lines of the other file. Lines that only exist in the second file must be from files that have been added, and lines that only exist in the first file must be from files that have been deleted. And that's why I use sort above, as join can only work correctly if the lines were sorted by sort.
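As an aside, if your join implementation rejects an empty -t delimiter, the POSIX comm utility can produce the same two lists from the sorted input (a sketch using the same variables):
newFiles="$tmpDir/files_added_since_last_sync.txt"
comm -13 "$filesAfterLastSync" "$filesForThisSync" > "$newFiles"
deletedFiles="$tmpDir/files_removed_since_last_sync.txt"
comm -23 "$filesAfterLastSync" "$filesForThisSync" > "$deletedFiles"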
Finally we perform three sync operations. First we sync all new files to the remote storage to ensure these are not getting lost when we start working with delete operations:
rsync -aum --files-from="$newFiles" "$localDir/" "$remoteStorage"
What is -aum? -a means archive, which means sync recursively, keep symbolic links, keep file permissions, keep all timestamps, try to keep ownership and group, and a few other things (it's a shortcut for -rlptgoD). -u means update, which means if a file already exists at the destination, only sync if the source file has a newer last modification date. -m means prune empty directories (you can leave it out if that isn't desired).
Next we sync from remote storage to local with deletion, to get all changes and file deletions performed by other clients, yet we exclude the files that have been deleted locally, as otherwise those would get restored, which we don't want:
rsync -aum --delete --exclude-from="$deletedFiles" "$remoteStorage/" "$localDir"
And finally we sync from local to remote storage with deletion, to update files that were changed locally and delete files that were deleted locally.
rsync -aum --delete "$localDir/" "$remoteStorage"
Some people might think that this is too complicated and it can be done with just two syncs. First sync remote to local with deletion and exclude all files that were either added or deleted locally (that way we also only need to produce a single special file, which is even easier to produce). Then sync local to remote with deletion and exclude nothing. Yet this approach is faulty. It requires a third sync to be correct.
Consider this case: Client A created FileX but hasn't synced yet. Client B also creates FileX a bit later and syncs at once. When client A now performs the two syncs above, FileX on remote storage is newer and should replace FileX on client A, but that won't happen. The first sync explicitly excludes FileX; it was added on client A and thus must be excluded so it isn't deleted by the first sync (client A cannot know that FileX was also added and uploaded to remote by client B). And the second sync would only upload to remote and exclude FileX, as the remote one is newer. After the sync, client A has an outdated FileX, despite the fact that an updated one exists on remote.
To fix that, a third sync from remote to local without any exclusion is required. So you would also end up with three sync operations, and compared to the three I presented above, I think the ones above are always at least as fast and sometimes even faster, so I would prefer them; however, the choice is yours. Also, if you don't need to support that edge case, you can skip the last sync operation; the problem will then resolve itself automatically on the next sync.
Before the script quits, don't forget to update our file list for the next sync:
( cd "$localDir" && find . ) | sed "s/^\.//" | sort > "$filesAfterLastSync"
Finally, --delete implies --delete-before or --delete-during, depending on your version of rsync. You may prefer another, explicitly specified delete operation.
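For example, to defer deletions until after each transfer has finished, the last two syncs might use --delete-after instead (flag availability depends on your rsync version, so treat this as a sketch):
rsync -aum --delete-after --exclude-from="$deletedFiles" "$remoteStorage/" "$localDir"
rsync -aum --delete-after "$localDir/" "$remoteStorage"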

Can "rsync --append" replace files that are larger at destination?

I have an rsync job that moves log files from a web server to an archive. The server rotates its own logs, so I might see a structure like this:
/logs
error.log
error.log.20200420
error.log.20200419
error.log.20200418
I use rsync to sync these log files every few minutes:
rsync --append --size-only /foo/logs/* /mnt/logs/
This command syncs everything with the least amount of processing, and that's important - calculating checksums or writing an entire file every time a few lines are added is a no-go. But instead of replacing them, it skips files whose copy at the destination is already the same size or larger than the source:
man rsync:
--append [...] If a file needs to be transferred and its size on the receiver is the
same or longer than the size on the sender, the file is skipped.
Is there a way to tell rsync to replace files instead in this case? Using --append is important for me and works well for other log files that use unique filenames. Maybe there's a better tool for this?
The service is a packaged application that I can't really edit or configure unfortunately, so changing the file structure or paths isn't an option for me.

Merge two folders (update files, no overwrite), command line

I've got two folders: provided and done. At the start, done is made by copying provided, and then I've made some changes in done (implementing some functions). Then comes an update to provided: some functions in existing files are added, and there are some new files too.
I want to merge these two folders (provided into done):
new files must be copied
existing files must be updated (as in a git merge, appending only what is new) -- this is the hard part
Is there any existing command (for linux) that can achieve this?
git merge-file current-file base-file other-file
git merge-file incorporates all changes that lead from the base-file to other-file into current-file. The result ordinarily goes into current-file.
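Applied to the two folders, that could look roughly like the following sketch. It assumes the original, pre-update copy of provided was kept around as provided.orig to serve as the merge base, and that cp supports -n (no-clobber); all three directory names are illustrative:
# copy files that are new in provided, without overwriting anything in done
cp -Rn provided/. done/
# three-way merge every file that exists in all three trees
( cd provided && find . -type f ) | while IFS= read -r f; do
    if [ -f "done/$f" ] && [ -f "provided.orig/$f" ]; then
        git merge-file "done/$f" "provided.orig/$f" "provided/$f"
    fi
done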

Retrieving latest file in a directory from a remote server

I was hoping to crack this myself, but it seems I have fallen at the first hurdle because I can't make head nor tail of the other options I've read about.
I wish to access a database file hosted as follows (i.e. the hhsuite_dbs is a folder containing several databases)
http://wwwuser.gwdg.de/~compbiol/data/hhsuite/databases/hhsuite_dbs/pdb70_08Oct15.tgz
Periodically, they update these databases, and so I want to download the latest version. My plan is to run a bash script via cron, most likely monthly (though I've yet to even tackle the scheduling aspect of the task).
I believe the database is refreshed fortnightly, so if my script runs monthly I can expect there to be a new version. I'll then be running downstream programs that require the database.
My question is then, how do I go about retrieving this (and for a little more finesse I'd perhaps like to be able to check whether the remote file has changed in name or content, to avoid a large download if it's unnecessary)? Is the best approach to query the name of the file, or its last-modified date (given that they may change the naming syntax of the file too)? To my naive brain, some kind of globbing for pdb70 (something I think I can rely on being in the filename), then pulling it down with wget, was all I had come up with so far.
EDIT Another confounding issue that has just occurred to me is that the file I want won't necessarily be the newest in the folder (as there are other types of databases there too); rather, I need the newest version of, in this case, the pdb70 database.
Solutions I've looked at so far have mentioned weex, lftp and curlftpls, but all of these seem to suggest logins/passwords for the server, which I don't have/need if I just download it via the web. I've also seen mention of rsync, but on a cursory read it seems like people are steering clear of it for FTP uses.
Quite a few barriers in your way for this.
My first suggestion is that rather than getting the filename itself, you simply mirror the directory using wget, which should already be installed on your Ubuntu system, and let wget figure out what to download.
base="http://wwwuser.gwdg.de/~compbiol/data/hhsuite/databases/hhsuite_dbs/"
cd /some/place/safe/
wget --mirror -nd "$base"
And new files will be created in the "safe" directory.
But that just gets you your mirror. What you're still after is the "newest" file.
Luckily, wget sets the datestamp of files it downloads, if it can. So after mirroring, you might be able to do something like:
newestfile=$(ls -t /some/place/safe/pdb70*gz | head -1)
Note that this fails if ever there are newlines in the filename.
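If that edge case matters, a small bash loop avoids parsing ls output entirely (a sketch using the same made-up path):
newestfile=
for f in /some/place/safe/pdb70*gz; do
    [ "$f" -nt "$newestfile" ] && newestfile=$f
done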
Another possibility might be to check the difference between the current file list and the last one. Something like this:
#!/bin/bash
base="http://wwwuser.gwdg.de/~compbiol/data/hhsuite/databases/hhsuite_dbs/"
cd /some/place/safe/
wget --mirror -nd "$base"
rm index.html* *.gif # remove debris from mirroring an index
ls > /tmp/filelist.txt.$$
if [ -f /tmp/filelist.txt ]; then
    echo "Difference since last check:"
    diff /tmp/filelist.txt /tmp/filelist.txt.$$
fi
mv /tmp/filelist.txt.$$ /tmp/filelist.txt
You can parse the diff output (man diff for more options) to determine what file has been added.
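For instance, in diff's default output, lines that exist only in the second (newer) listing are prefixed with "> ", so something like this (a sketch reusing the temporary file names from the script above) prints just the added filenames:
diff /tmp/filelist.txt /tmp/filelist.txt.$$ | sed -n 's/^> //p'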
Of course, with a solution like this, you could run your script every day and hopefully download a new update within a day of it being ready, rather than a fortnight later. Nice thing about --mirror is that it won't download files that are already on-hand.
Oh, and I haven't tested what I've written here. That's one monstrously large file.
