cron.sh not working for Magento cron job

I have been having problems with Magento cron jobs not running. It seems that certain commands invoked by cron.sh are not allowed by my hosting company (ps being one of them), so the shell script fails before the cron job is run. As my cron entry in cPanel declares the full path, I am wondering if I can remove certain lines from cron.sh, e.g.:
#!/bin/sh
# location of the php binary
if [ ! "$1" = "" ] ; then
CRONSCRIPT=$1
else
CRONSCRIPT=cron.php
fi
MODE=""
if [ ! "$2" = "" ] ; then
MODE=" $2"
fi
PHP_BIN=`which php`
# absolute path to magento installation
INSTALLDIR=`echo $0 | sed 's/cron\.sh//g'`
# prepend the installation path if not given an absolute path
# if [ "$INSTALLDIR" != "" -a "`expr index $CRONSCRIPT /`" != "1" ];then
# if ! ps auxwww | grep "$INSTALLDIR$CRONSCRIPT$MODE" | grep -v grep 1>/dev/null 2>/dev/null ; then
# $PHP_BIN $INSTALLDIR$CRONSCRIPT$MODE &
# fi
#else
# if ! ps auxwww | grep "$CRONSCRIPT$MODE" | grep -v grep | grep -v cron.sh 1>/dev/null 2>/dev/null ; then
$PHP_BIN $CRONSCRIPT$MODE &
# fi
#fi
Does anyone know if this will work and are there any drawbacks/consequences?

Without having particular knowledge of this functionality, it looks like it could be trying to avoid running the cron script again while it's already running. Perhaps the same could be done with a lock file, but this is one area of Magento I wouldn't muck around with without a lot of research.
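For illustration only, a minimal sketch of the lock-file idea, assuming the flock(1) utility is available on the host; the lock path and the direct cron.php invocation are placeholders, not Magento's own mechanism:
#!/bin/sh
# skip this run entirely if a previous cron.php is still holding the lock
LOCKFILE=/tmp/magento-cron.lock
PHP_BIN=`which php`
# -n: exit immediately with a non-zero status instead of waiting for the lock
flock -n "$LOCKFILE" "$PHP_BIN" cron.php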
This is orthogonal to a larger issue, however. Magento is pickier about hosting than the average PHP codebase, and this is probably just the beginning of the issues you will have with your host. I strongly recommend considering a host that is very familiar with Magento's needs. If commenting out chunks of Magento core code becomes the norm, you will run into many more issues down the line.

Related

Ampersand in bash script not working

I am using the ampersand (&) to place a command in the background. But in this script, for some reason, it doesn't work. My programming skills are not great, so please remember I'm a noob trying to get stuff working.
#!/bin/bash
# Date in format used by filenaming
date=$(date '+%Y%m%d')
# Location where the patch files should be downloaded
patches=~/lists/patches
# Location of the full list
blacklist=~/lists/list
while :
do
# Fetching last download date from downloaded patches
ldd=$(cd $patches && printf '%s\n' * | sed "s/[^0-9]*//g"); echo $ldd
if [ "$ldd" = "" ]
then
break
else
if [ "$ldd" = "$date" ]
then
break
else
ndd=$(date +%Y%m%d -d "${ldd}+1 days")
# Can't have multiple patches in the $patches directory, otherwise the script won't work
rm -rf $patches/*
sleep 1
file=$patches/changes-$ndd.diff.gz
curl -s -o "$file" "http://url.com/directory/name-$ndd.diff.gz" &
sleep 1
done=$(jobs -l | grep curl | wc -l)
until [ "$done" == 1 ]
do
echo "still here"
done
gunzip "$file"
# Apply patch directory to list's file directories
cat $(echo "$file" | sed "s/.gz//g") | sed 's/.\/yesterday//' | sed 's/.\/today//' > $patches/$ndd.diff
rm $(echo $file | sed "s/.gz//g")
cd $blacklist
patch -p1 --batch -r /root/fail.patch < $patches/$ndd.diff
rm /root/fail.patch
fi
fi
done
What I want to do is make the script wait for each command until the previous one has finished. As you can see, I used sleep in places, but I know that isn't a solution. I also read about the wait command, but then you have to place a command in the background using the ampersand, and that's the problem: for some reason this script doesn't seem to honour the ampersand at the end of my curl command. I also tried wget, with the same results. Who can point me in the right direction?
The done variable is set once and never changes after the first check, so the until loop can never terminate. You need to repeat the check on every iteration, which is why you should test the command itself, not a variable.
A while loop is also better here, because the condition needs to be checked before entering the loop:
while [ "$(jobs -l | grep curl | wc -l)" -ne 0 ]; do
echo "Still there"
sleep 1
done
I've added sleep because otherwise it would just flood your console.
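Alternatively, since curl is the only job placed in the background, you could record its PID and use the wait builtin instead of polling. A sketch under that assumption, not tested against the full script:
curl -s -o "$file" "http://url.com/directory/name-$ndd.diff.gz" &
curl_pid=$!
# block until that specific background process has finished
wait "$curl_pid"
gunzip "$file"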

deb package fails while executing the postinst bash script

I'm making a bash script for Nagios custom plugins & configuration.
I use equivs for its simplicity.
Here is my control file.
In the Files: section, I list the files to be copied to their target paths.
Files: check_cpu_loadx /usr/lib/nagios/plugins
check_ipmi_sensors /usr/lib/nagios/plugins
check_libreoffice_count /usr/lib/nagios/plugins
check_ram_per_user /usr/lib/nagios/plugins
check_ram_usage2 /usr/lib/nagios/plugins
check_ram_usage_percentage /usr/lib/nagios/plugins
check_tcptraffic /usr/lib/nagios/plugins
nrpe_custom.cfg /etc/nagios
The postinst section is a bash script that is run after installation:
File: postinst
#!/bin/bash -e
set -x
echo 'configuring nrpe.conf file.'
mv /etc/nagios/nrpe.cfg /etc/nagios/nrpe.original.backup
mv /etc/nagios/nrpe_custom.cfg /etc/nagios/nrpe.cfg
chmod -R +x /usr/lib/nagios/plugins
echo 'Installing tcp-ip addon..'
FLAG=0
Interfaces=`ifconfig -a | grep -o -e "[a-z][a-z]*[0-9]*[ ]*Link" | perl -pe "s|^([a-z]*[0-9]*)[ ]*Link|\1|"`
for Interface in $Interfaces; do
INET=`ifconfig $Interface | grep -o -e "inet addr:[^ ]*" | grep -o -e "[^:]*$"`
MASK=`ifconfig $Interface | grep -o -e "Mask:[^ ]*" | grep -o -e "[^:]*$"`
STATUS="up"
#loopback
if [ "$Interface" == "lo" ]; then
continue
fi
#if eth is down
if [ -z "$INET" ]; then
continue
fi
#if eth ip not starts with 10. or 192.
if [[ "$INET" == 10.* ]]
then
ActiveEth=$Interface;
break
elif [[ "$INET" == 192.* ]]
then
ActiveEth=$Interface;
break
else
echo "Ethernet selection failed! Configure nrpe.cfg manually. Change the tcp_traffic plugin parameters according to your current ethernet.";
FLAG=1
break
fi
done
if [[ "$FLAG" == 0 ]]
then
echo 'Selected Ethernet :'$ActiveEth
sed -i -e "s/eth0/$ActiveEth/g" /etc/nagios/nrpe.cfg
fi
echo 'nrpe.conf changed.'
echo 'Nagios-nrpe-server restarting.'
service nagios-nrpe-server restart
echo 'IPMI modules are loading.'
modprobe ipmi_devintf
modprobe ipmi_msghandler
echo "IPMI modules are added to startup."
#echo "ipmi_si" >> /etc/modules
echo "ipmi_devintf" >> /etc/modules
echo "ipmi_msghandler" >> /etc/modules
The problem: when I build and install the deb package, I get "subprocess installed post-installation script returned error exit status 1". I then added set -x for debugging. The failure is in configuring the tcp-ip addon: some machines have more than one ethernet card, so I need to choose the one whose IP starts with 10.* or 192.*.
In the second section there is the line
INET=`ifconfig $Interface | grep -o -e "inet addr:[^ ]*" | grep -o -e "[^:]*$"`
When an ethernet device has no IP, grep matches nothing and returns a non-zero exit status, so the INET variable becomes null; because the script runs with bash -e, that is why the process exits with status 1. After that line, when I echo $?, it says 1.
So the problem is that when I run dpkg -i to install the package, the bash script quits as soon as it sees that INET is null.
Any help would be appreciated; I'm new to this bash thing.
If you want to make sure that a bash command always succeeds, even if the last program returns a non-zero exit status, just append a "very last" command that will succeed.
Something like
INET=$(/sbin/ifconfig eth0 | grep -o -e "inet addr:[^ ]*" | grep -o -e "[^:]*$" || true)
Here we call true (a small program that always succeeds) whenever grep fails (|| means OR and is a way to chain programs depending on the exit status of the previous one).
However, your script has a number of flaws:
Your grep expression "inet addr:" will only give correct results in an English locale; e.g. when running in a German environment (LANG=de) you can get strings like inet Adresse: 192.168.7.10 (sic!).
You are unconditionally moving files around; what happens if these files are not there?
You are unconditionally moving files in /etc. /etc is the place where sysadmins adjust the system to their needs; you should not delete or revert the sysadmin's configuration. Rather, document how to properly configure the system so sysadmins can do it themselves; if you insist on "helping" by automatically configuring the system, you should use something like debconf.
You assume that a lot of software is installed and on your PATH. You should use fully qualified paths to the binaries you are using, e.g. /sbin/ifconfig rather than just ifconfig.
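For the locale problem specifically, a sketch of a locale-independent alternative, assuming the iproute2 ip tool is installed at /sbin/ip (the || true guard is kept because the script runs with -e):
# ip(8) output is stable across locales; field 4 is the address in CIDR form
INET=$(/sbin/ip -o -4 addr show dev "$Interface" 2>/dev/null | awk '{print $4}' | cut -d/ -f1 || true)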

Can I cache the output of a command on Linux from the CLI?

I'm looking for an implementation of a 'cacheme' command, which 'memoizes' the output (stdout) of whatever command is in ARGV. If the command has never been run, it runs it and memoizes the output. If it has been run before, it just replays the cached output (or even better, replays both stdout and stderr to &1 and &2 respectively).
Let's suppose someone wrote this command, it would work like this.
$ time cacheme sleep 1 # first time it takes one sec
real 0m1.228s
user 0m0.140s
sys 0m0.040s
$ time cacheme sleep 1 # second time it looks for stdout in the cache (dflt expires in 1h)
#DEBUG# Cache version found! (1 minute old)
real 0m0.100s
user 0m0.100s
sys 0m0.040s
This example is a bit silly because it has no output. Ideally it would be tested on a script like sleep-1-and-echo-hello-world.sh.
I created a small script that creates a file in /tmp/ with hash of full command name and username, but I'm pretty sure something already exists.
Are you aware of any of this?
Note: why would I do this? Occasionally I run commands that are network- or compute-intensive; they take minutes to run and the output doesn't change much. If I know this in advance I can just prepend cacheme <cmd>, go for dinner, and when I'm back I can rerun the SAME command over and over on the same machine and get the same answer in an instant.
Improved the solution above somewhat by also adding the expiry age as an optional argument.
#!/bin/sh
# save as e.g. $HOME/.local/bin/cacheme
# and then chmod u+x $HOME/.local/bin/cacheme
VERBOSE=false
PROG="$(basename $0)"
DIR="${HOME}/.cache/${PROG}"
mkdir -p "${DIR}"
EXPIRY=600 # default to 10 minutes
# check if first argument is a number, if so use it as expiration (seconds)
[ "$1" -eq "$1" ] 2>/dev/null && EXPIRY=$1 && shift
[ "$VERBOSE" = true ] && echo "Using expiration $EXPIRY seconds"
CMD="$@"
HASH=$(echo "$CMD" | md5sum | awk '{print $1}')
CACHE="$DIR/$HASH"
test -f "${CACHE}" && [ $(expr $(date +%s) - $(date -r "$CACHE" +%s)) -le $EXPIRY ] || eval "$CMD" > "${CACHE}"
cat "${CACHE}"
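For example, assuming the script is saved as cacheme per the comments above (the URL is a placeholder), the first call runs the command and the second replays the cache:
cacheme 300 curl -s http://example.com/slow-api
cacheme 300 curl -s http://example.com/slow-api # within 300 seconds: served from the cache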
I've implemented a simple caching script for bash, because I wanted to speed up plotting from a piped shell command in gnuplot. It can be used to cache the output of any command. The cache is used as long as the arguments are the same and the files passed in the arguments haven't changed. The system is responsible for cleaning up.
#!/bin/bash
# hash all arguments
KEY="$@"
# hash last modified dates of any files
for arg in "$@"
do
if [ -f "$arg" ]
then
KEY+=`date -r "$arg" +\ %s`
fi
done
# use the hash as a name for temporary file
FILE="/tmp/command_cache.`echo -n "$KEY" | md5sum | cut -c -10`"
# use cached file or execute the command and cache it
if [ -f "$FILE" ]
then
cat "$FILE"
else
"$@" | tee "$FILE"
fi
You can name the script cache, set executable flag and put it in your PATH. Then simply prefix any command with cache to use it.
Author of bash-cache here with an update. I recently published bkt, a CLI and Rust library for subprocess caching. Here's a simple example:
# Execute and cache an invocation of 'date +%s.%N'
$ bkt -- date +%s.%N
1631992417.080884000
# A subsequent invocation reuses the same cached output
$ bkt -- date +%s.%N
1631992417.080884000
It supports a number of features such as asynchronous refreshing (--stale and --warm), namespaced caches (--scope), and optionally keying off the working directory (--cwd) and select environment variables (--env). See the README for more.
It's still a work in progress but it's functional and effective! I'm using it already to speed up my shell prompt and a number of other common tasks.
I created bash-cache, a memoization library for Bash, which works exactly how you're describing. It's designed specifically to cache Bash functions, but obviously you can wrap calls to other commands in functions.
It handles a number of edge-case behaviors that many simpler caching mechanisms miss. It reports the exit code of the original call, keeps stdout and stderr separately, and retains any trailing whitespace in the output ($() command substitutions will truncate trailing whitespace).
Demo:
# Define function normally, then decorate it with bc::cache
$ maybe_sleep() {
sleep "$@"
echo "Did I sleep?"
} && bc::cache maybe_sleep
# Initial call invokes the function
$ time maybe_sleep 1
Did I sleep?
real 0m1.047s
user 0m0.000s
sys 0m0.020s
# Subsequent call uses the cache
$ time maybe_sleep 1
Did I sleep?
real 0m0.044s
user 0m0.000s
sys 0m0.010s
# Invocations with different arguments are cached separately
$ time maybe_sleep 2
Did I sleep?
real 0m2.049s
user 0m0.000s
sys 0m0.020s
There's also a benchmark function that shows the overhead of the caching:
$ bc::benchmark maybe_sleep 1
Original: 1.007
Cold Cache: 1.052
Warm Cache: 0.044
So you can see the read/write overhead (on my machine, which uses tmpfs) is roughly 1/20th of a second. This benchmark utility can help you decide whether it's worth caching a particular call or not.
How about this simple shell script (not tested)?
#!/bin/sh
mkdir -p cache
cachefile=cache/cache
for i in "$@"
do
cachefile=${cachefile}_$(printf %s "$i" | sed 's/./\\&/g')
done
test -f "$cachefile" || "$@" > "$cachefile"
cat "$cachefile"
Improved upon the solution from "error" above:
Pipes output into the tee command, which allows it to be viewed in real time as well as stored in the cache.
Preserves colors (for example in commands like ls --color) by using script --flush --quiet /dev/null --command "$CMD".
Avoids calling exec, by using script as well.
Uses bash and [[.
#!/usr/bin/env bash
CMD="$@"
[[ -z $CMD ]] && echo "usage: EXPIRY=600 cache cmd arg1 ... argN" && exit 1
# set -e -x
VERBOSE=false
PROG="$(basename $0)"
EXPIRY=${EXPIRY:-600} # default to 10 minutes, can be overridden
EXPIRE_DATE=$(date -Is -d "-$EXPIRY seconds")
[[ $VERBOSE = true ]] && echo "Using expiration $EXPIRY seconds"
HASH=$(echo "$CMD" | md5sum | awk '{print $1}')
CACHEDIR="${HOME}/.cache/${PROG}"
mkdir -p "${CACHEDIR}"
CACHEFILE="$CACHEDIR/$HASH"
if [[ -e $CACHEFILE ]] && [[ $(date -Is -r "$CACHEFILE") > $EXPIRE_DATE ]]; then
cat "$CACHEFILE"
else
script --flush --quiet --return /dev/null --command "$CMD" | tee "$CACHEFILE"
fi
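For example, assuming the script is saved as cache somewhere on your PATH:
EXPIRY=60 cache ls --color # first run executes ls and stores the colored output
EXPIRY=60 cache ls --color # within 60 seconds: replayed from the cache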
The solution I came up with in Ruby is this. Does anybody see any optimization?
#!/usr/bin/env ruby
VER = '1.2'
$time_cache_secs = 3600
$cache_dir = File.expand_path("~/.cacheme")
require 'rubygems'
begin
require 'filecache' # gem install ruby-cache
rescue Exception => e
puts 'gem filecache requires installation, sorry. trying to install myself'
system 'sudo gem install -r filecache'
puts 'Try re-running the program now.'
exit 1
end
=begin
# create a new cache called "my-cache", rooted in /home/simon/caches
# with an expiry time of 30 seconds, and a file hierarchy three
# directories deep
=end
def main
cache = FileCache.new("cache3", $cache_dir, $time_cache_secs, 3)
cmd = ARGV.join(' ').to_s # caching on full command, note that quotes are stripped
cmd = 'echo give me an argument' if cmd.length < 1
# caches the command and retrieves it
if cache.get('output' + cmd)
#deb "Cache found!(for '#{cmd}')"
else
#deb "Cache not found! Recalculating and setting for the future"
cache.set('output' + cmd, `#{cmd}`)
end
#deb 'anyway calling the cache now'
print(cache.get('output' + cmd))
end
main
An implementation exists here: https://bitbucket.org/sivann/runcached/src
It caches the executable path, output, and exit code, and remembers arguments. Expiration is configurable. Implemented in bash, C, and Python; choose whichever suits you.

Shell script help

I need help with two scripts I'm trying to merge into one. There are two different ways to detect issues with a bad NFS mount. One is that, if there is an issue, a df will hang; the other is that df works but there are other issues with the mount, which a find (mount name) -type d will catch.
I'm trying to combine the scripts to catch both issues: run the find -type d and, if there is an issue, return an error. If the second kind of NFS issue occurs and the find hangs, kill the find command after 2 seconds, then run the second part of the script and, if the NFS issue is occurring, return an error. If neither type of NFS issue is occurring, return an OK.
MOUNTS="egrep -v '(^#)' /etc/fstab | grep nfs | awk '{print $2}'"
MOUNT_EXCLUDE=()
if [[ -z "${NFSdir}" ]] ; then
echo "Please define a mount point to be checked"
exit 3
fi
if [[ ! -d "${NFSdir}" ]] ; then
echo "NFS CRITICAL: mount point ${NFSdir} status: stale"
exit 2
fi
cat > "/tmp/.nfs" << EOF
#!/bin/sh
cd \$1 || { exit 2; }
exit 0;
EOF
chmod +x /tmp/.nfs
for i in ${NFSdir}; do
CHECK="ps -ef | grep "/tmp/.nfs $i" | grep -v grep | wc -l"
if [ $CHECK -gt 0 ]; then
echo "NFS CRITICAL : Stale NFS mount point $i"
exit $STATE_CRITICAL;
else
echo "NFS OK : NFS mount point $i status: healthy"
exit $STATE_OK;
fi
done
The MOUNTS and MOUNT_EXCLUDE lines are immaterial to this script as shown.
You've not clearly identified where ${NFSdir} is being set.
The first part of the script assumes ${NFSdir} contains a single directory value; the second part (the loop) assumes it may contain several values. Maybe this doesn't matter since the loop unconditionally exits the script on the first iteration, but it isn't the clear, clean way to write it.
You create the script /tmp/.nfs but:
You don't execute it.
You don't delete it.
You don't allow for multiple concurrent executions of this script by making a per-process file name (such as /tmp/.nfs.$$).
It is not clear why you hide the script in the /tmp directory with the . prefix to the name. It probably isn't a good idea.
Use:
tmpcmd=${TMPDIR:-/tmp}/nfs.$$
trap "rm -f $tmpcmd; exit 1" 0 1 2 3 13 15
...rest of script - modified to use the generated script...
rm -f $tmpcmd
trap 0
This gives you the maximum chance of cleaning up the temporary script.
There is no df left in the script, whereas the question implies there should be one. You should also look into the timeout command (though commands hung because NFS is not responding are generally very difficult to kill).
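For the hang case, a minimal sketch using timeout, assuming GNU coreutils timeout is available and ${NFSdir} is set as in the question (a df hung on dead NFS may still resist being killed):
if ! timeout 2 df "${NFSdir}" >/dev/null 2>&1 ; then
echo "NFS CRITICAL : df on ${NFSdir} hung or failed"
exit 2
fi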

App Engine: Launching a script upon update/run

I'm working with App Engine and I'm thinking about using the LESS CSS extension in my next project. There's no good LESS CSS library written in Python so I went on with the original Ruby one which works great and out of the box. I'd like App Engine to execute lessc ./templates/css/style.less before running the development server and before uploading the files to the cloud. What is the best way to automate this? I'm thinking:
#run.sh:
lessc ./templates/css/style.less
.gae/dev_appserver.py --use_sqlite .
And
#deploy.sh
lessc ./templates/css/style.less
.gae/appcfg.py update .
Am I on the correct path or is there a more elegant way of doing things, perhaps at the appcfg.py level?
Thanks.
One option is to use the JavaScript version of Less and hence do the Less-to-CSS conversion in the browser: simply upload your less-formatted file (see http://lesscss.org/ for details).
Alternately, I do the conversion (first with less, now I use sass) in a deploy script which does a number of things:
checks that my source code control has no outstanding files checked out (uncommited changes)
joins and minifies my .js code (and runs jslint over it) into a single file
generates other content (including stamping the source code control version as a version number into certain key files and as a parameter on some files to avoid caching issues) so my main page pulls in scripts with URLs such as "allmysource.js?v=585".. the file might be static but the added params force cache invalidation
calls appcfg to perform the upload and checks the return code
makes some calls to the real site with wget to check the previously generated files are actually returned, by checking they're stamped with the expected version
applies another source code control tag to say that the intended version was successfully deployed
My script also accepts a "-preview" flag in which case it doesn't actually do the upload, but reports the version control comments for what's changed since the previous deployment.
me#here $ ./deploy -preview
Deployment preview...
Would deploy v596 to the production site (currently v593, previously v587)
594 Fix blah blah blah for X Y Z
595 New feature nah nah nah
596 Update help pages
This is pretty handy as a reminder of what I need to put in things like a changelog.
I plan to also expand it so that I can, as part of my source code control, add any code that needs running once only when deployed (eg database schema changes) and know that it'll be automatically run when I next deploy a new version.
Essence of the script below as people asked... it doesn't show my "check code, generate, join, and minify" as that's another script... I realise that the original question was asking about that step of course :) but you can see where you'd add the call to generate CSS etc
#!/bin/sh
function abort () {
echo
echo "ERROR: $1"
echo "$2"
exit 99
}
function warn () {
echo
echo "WARNING: $1"
echo "$2"
}
# Overrides the Gentoo eselect mechanism to force the python version the GAE scripts expect
export EPYTHON=python2.5
# names of tags used to label bzr versions
CURR_DTAG=deployed
PREV_DTAG=prevDeployed
# command line options
PREVIEW=0
IGNORE_BZR=0
# These next few vars are set to values to identify my site, insert your own values here...
APPID=your_gae_appid_here
ADMIN_EMAIL=your_admin_email_address_here
SRCDIR=directory_to_deploy
CHECK_URL=url_of_page_to_retrive_that_does_upload_initialisation
for ARG; do
if [[ "$ARG" == "-preview" ]]; then
echo "Deployment preview..."
PREVIEW=1
fi
if [[ "$ARG" == "-force" ]]; then
echo "Ignoring the fact some files may not be committed to bzr..."
IGNORE_BZR=1
fi
done
echo
# check bzr for uncommitted changes
BSTATUS=`bzr status`
if [[ "$BSTATUS" != "" ]]; then
if [[ "$IGNORE_BZR" == "0" ]]; then
abort "There are uncommitted changes - commit/revert/ignore all files before deploying" "$BSTATUS"
else
warn "There are uncommitted changes" "$BSTATUS"
fi
fi
# get version numbers of last deployed etc
currver=`bzr log -l1 --line | sed -e 's/: .*//'`
lastver=`bzr log -rtag:${CURR_DTAG} --line | sed -e 's/: .*//'`
prevver=`bzr log -rtag:${PREV_DTAG} --line | sed -e 's/: .*//'`
lastlog=`bzr log -l 1 --line gae/changelog | sed -e 's/: .*//'`
RELEASE_NOTES=`bzr log --short --forward -r $lastver..$currver \
| perl -ne '$ver = $1 if /^ {0,4}(\d+) /; print " $ver $_" if ($ver and /^ {5,}\w/)' \
| grep -v "^ *$lastver "`
LOG_NOTES=`bzr log --short --forward -r $lastlog..$currver \
| perl -ne '$ver = $1 if /^ {0,4}(\d+) /; print " $ver $_" if ($ver and /^ {5,}\w/)' \
| grep -v "^ *$lastlog "`
# Crude but old habit - BUGBUGBUG is a marker in the code for things to be fixed before deployment
echo "Checking code for outstanding issues before deployment"
BUGSTATUS=`grep BUGBUGBUG js/*js`
if [[ "$BUGSTATUS" != "" ]]; then
if [[ "$IGNORE_BZR" == "0" ]]; then
abort "There are outstanding BUGBUGBUGs - fix them before deploying" "$BUGSTATUS"
else
warn "There are outstanding BUGBUGBUGs" "$BUGSTATUS"
fi
fi
echo
echo "Deploy v$currver to the production site (currently v$lastver, previously v$prevver)"
echo "$RELEASE_NOTES"
echo
if [[ "$currver" -gt "$lastlog" && "$lastver" -ne "$lastlog" ]]; then
echo "Changes since the changelog was last updated"
echo "$LOG_NOTES"
echo
fi
if [[ "$IGNORE_BZR" == "0" && $lastver -ge $currver ]]; then
abort "There don't appear to be any changes to deploy..."
fi
if [[ "$PREVIEW" == "1" ]]; then
exit 0
fi
$EPYTHON -c "import ssl" \
|| abort "$EPYTHON can't find ssl module for $EPYTHON - download it from pypi and install with the inbuilt setup.py"
# REMOVED - call to my script that calls jslint, generates files and compresses JS etc
# || abort "Generation of code failed"
/opt/google_appengine/appcfg.py --email=$ADMIN_EMAIL -v -A $APPID update $SRCDIR \
|| abort "Appcfg failed - upload presumably incomplete"
# move the tags to show we deployed properly
bzr tag -r $lastver --force ${PREV_DTAG}
bzr tag -r $currver --force ${CURR_DTAG}
echo
echo "Production site updated from v$lastver to v$currver (in turn from v$prevver)"
echo
echo "Now visiting $CHECK_URL to upload the source to the database"
# the new version doesn't always seem to be there right away (may be caching by the webserver etc) to be uploaded into the database... try again just in case
for cb in $RANDOM $RANDOM $RANDOM $RANDOM ; do
prodver=`wget $CHECK_URL?_cb=$cb -q -O - | perl -ne 'print $1 if /^\s*Rev #(\d+)\s*$/'`
if [[ "$currver" == "$prodver" ]]; then
echo "OK: New version $prodver successfully deployed"
exit 0
fi
echo "Retrying the upload of source to the database"
sleep 5
done
abort "The new source doesn't seem to be loading into the database" "Try 'wget $CHECK_URL?_cb=$RANDOM -q -O -'"
It's not particularly big or clever, but it automates the upload job.
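To cover the LESS step from the original question, the generation call could slot in where the removed generate/minify script was invoked, e.g. (assuming lessc is on the PATH and the paths match your project layout):
lessc ./templates/css/style.less ./templates/css/style.css \
|| abort "LESS compilation failed" "Check the lessc output"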
