I'm working with App Engine and I'm thinking about using the LESS CSS extension in my next project. There's no good LESS CSS library written in Python, so I went with the original Ruby one, which works great out of the box. I'd like App Engine to execute lessc ./templates/css/style.less before running the development server and before uploading the files to the cloud. What is the best way to automate this? I'm thinking:
#run.sh:
lessc ./templates/css/style.less
.gae/dev_appserver.py --use_sqlite .
And
#deploy.sh
lessc ./templates/css/style.less
.gae/appcfg.py update .
Am I on the correct path or is there a more elegant way of doing things, perhaps at the appcfg.py level?
Thanks.
One option is to use the JavaScript version of LESS and do the LESS-to-CSS conversion in the browser: simply upload your LESS-formatted file (see http://lesscss.org/ for details).
Alternatively, I do the conversion (first with LESS, now with Sass) in a deploy script that does a number of things:
checks that my source code control has no outstanding files checked out (uncommitted changes)
runs jslint over my .js code, then joins and minifies it into a single file
generates other content, including stamping the source-control revision into certain key files as a version number and appending it as a parameter to some URLs to avoid caching issues, so my main page pulls in scripts with URLs such as "allmysource.js?v=585"; the file might be static, but the added parameter forces cache invalidation
calls appcfg to perform the upload and checks the return code
makes some calls to the real site with wget to check the previously generated files are actually returned, by checking they're stamped with the expected version
applies another source code control tag to say that the intended version was successfully deployed
My script also accepts a "-preview" flag in which case it doesn't actually do the upload, but reports the version control comments for what's changed since the previous deployment.
me#here $ ./deploy -preview
Deployment preview...
Would deploy v596 to the production site (currently v593, previously v587)
594 Fix blah blah blah for X Y Z
595 New feature nah nah nah
596 Update help pages
This is pretty handy as a reminder of what I need to put in things like a changelog.
I also plan to expand it so that, as part of my source code control, I can add any code that needs to run only once when deployed (e.g. database schema changes) and know that it will be run automatically when I next deploy a new version.
The essence of the script is below, as people asked. It doesn't show my "check code, generate, join, and minify" step, as that's another script; I realise the original question was asking about that step, of course :) but you can see where you'd add the call to generate the CSS etc. (there's a sketch of that step after the script).
#!/bin/bash
function abort () {
    echo
    echo "ERROR: $1"
    echo "$2"
    exit 99
}
function warn () {
    echo
    echo "WARNING: $1"
    echo "$2"
}
# Overrides the Gentoo eselect mechanism to force the python version the GAE scripts expect
export EPYTHON=python2.5
# names of tags used to label bzr versions
CURR_DTAG=deployed
PREV_DTAG=prevDeployed
# command line options
PREVIEW=0
IGNORE_BZR=0
# These next few vars are set to values to identify my site, insert your own values here...
APPID=your_gae_appid_here
ADMIN_EMAIL=your_admin_email_address_here
SRCDIR=directory_to_deploy
CHECK_URL=url_of_page_to_retrieve_that_does_upload_initialisation
for ARG; do
    if [[ "$ARG" == "-preview" ]]; then
        echo "Deployment preview..."
        PREVIEW=1
    fi
    if [[ "$ARG" == "-force" ]]; then
        echo "Ignoring the fact some files may not be committed to bzr..."
        IGNORE_BZR=1
    fi
done
echo
# check bzr for uncommitted changes
BSTATUS=`bzr status`
if [[ "$BSTATUS" != "" ]]; then
if [[ "$IGNORE_BZR" == "0" ]]; then
abort "There are uncommited changes - commit/revert/ignore all files before deploying" "$BSTATUS"
else
warn "There are uncommited changes" "$BSTATUS"
fi
fi
# get the version numbers of the current revision, the last deployment, the previous deployment and the changelog
currver=`bzr log -l1 --line | sed -e 's/: .*//'`
lastver=`bzr log -rtag:${CURR_DTAG} --line | sed -e 's/: .*//'`
prevver=`bzr log -rtag:${PREV_DTAG} --line | sed -e 's/: .*//'`
lastlog=`bzr log -l 1 --line gae/changelog | sed -e 's/: .*//'`
RELEASE_NOTES=`bzr log --short --forward -r $lastver..$currver \
| perl -ne '$ver = $1 if /^ {0,4}(\d+) /; print " $ver $_" if ($ver and /^ {5,}\w/)' \
| grep -v "^ *$lastver "`
LOG_NOTES=`bzr log --short --forward -r $lastlog..$currver \
| perl -ne '$ver = $1 if /^ {0,4}(\d+) /; print " $ver $_" if ($ver and /^ {5,}\w/)' \
| grep -v "^ *$lastlog "`
# Crude but old habit - BUGBUGBUG is a marker in the code for things to be fixed before deployment
echo "Checking code for outstanding issues before deployment"
BUGSTATUS=`grep BUGBUGBUG js/*js`
if [[ "$BUGSTATUS" != "" ]]; then
if [[ "$IGNORE_BZR" == "0" ]]; then
abort "There are outstanding BUGBUGBUGs - fix them before deploying" "$BUGSTATUS"
else
warn "There are outstanding BUGBUGBUGs" "$BUGSTATUS"
fi
fi
echo
echo "Deploy v$currver to the production site (currently v$lastver, previously v$prevver)"
echo "$RELEASE_NOTES"
echo
if [[ "$currver" -gt "$lastlog" && "$lastver" -ne "$lastlog" ]]; then
echo "Changes since the changelog was last updated"
echo "$LOG_NOTES"
echo
fi
if [[ "$IGNORE_BZR" == "0" && $lastver -ge $currver ]]; then
abort "There don't appear to be any changes to deploy..."
fi
if [[ "$PREVIEW" == "1" ]]; then
exit 0
fi
$EPYTHON -c "import ssl" \
|| abort "$EPYTHON can't find ssl module for $EPYTHON - download it from pypi and install with the inbuilt setup.py"
# REMOVED - call to my script that calls jslint, generates files and compresses JS etc
# || abort "Generation of code failed"
/opt/google_appengine/appcfg.py --email=$ADMIN_EMAIL -v -A $APPID update $SRCDIR \
|| abort "Appcfg failed - upload presumably incomplete"
# move the tags to show we deployed properly
bzr tag -r $lastver --force ${PREV_DTAG}
bzr tag -r $currver --force ${CURR_DTAG}
echo
echo "Production site updated from v$lastver to v$currver (in turn from v$prevver)"
echo
echo "Now visiting $CHECK_URL to upload the source to the database"
# the new version doesn't always seem to be there straight away to be uploaded into the database (may be caching by the webserver etc.) - try again just in case
for cb in $RANDOM $RANDOM $RANDOM $RANDOM ; do
    prodver=`wget $CHECK_URL?_cb=$cb -q -O - | perl -ne 'print $1 if /^\s*Rev #(\d+)\s*$/'`
    if [[ "$currver" == "$prodver" ]]; then
        echo "OK: New version $prodver successfully deployed"
        exit 0
    fi
    echo "Retrying the upload of source to the database"
    sleep 5
done
abort "The new source doesn't seem to be loading into the database" "Try 'wget $CHECK_URL?_cb=$RANDOM -q -O -'"
It's not particularly big or clever, but it automates the upload job.
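Coming back to the LESS step the question actually asked about: a minimal sketch of what the removed generate stage might look like, reusing the abort helper from the script above. The lessc invocation is the one from the question (assuming the Ruby lessc default of writing style.css next to the source); the index.html path and the ?v= placeholder convention are purely illustrative assumptions:
# --- Hypothetical generate step (not part of the original script) ---
# Compile the LESS source exactly as in the question.
lessc ./templates/css/style.less \
    || abort "lessc failed - CSS was not regenerated"
# Stamp the current bzr revision into the main template so that static
# resources are fetched with a cache-busting ?v= parameter.
# Assumes the template already contains URLs like "allmysource.js?v=000".
sed -i -e "s/?v=[0-9]*/?v=${currver}/g" ./templates/index.html \
    || abort "version stamping failed"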
Related
I have uni coursework where I have to create a shell script that monitors a directory with a few text files and can inform you (a manual check to be informed is fine) if they have been modified. Honestly, I'm not sure where to start; I'm bad at Linux and can't find anything to help. I can only use standard tools in Ubuntu. Any help would be great, thank you.
Update: this is what I have so far, and I need a way to verify that the values printed are the same after altering a file (if they are not the same, then print which files have been changed).
Also, sorry, it's my first time using the site and I'm trying to learn.
#!/bin/sh
echo "press 1 to check - press 2 to exit"
while :
do
    read INPUT_STR
    case $INPUT_STR in
        1)
            echo "checking sums"
            md5sum Target/bob
            md5sum Target/bec
            md5sum Target/john
            md5sum Target/mary
            md5sum Target/mike
            ;;
        2)
            break
            ;;
        *)
            echo "incorrect input"
            ;;
    esac
done
echo "thankyou for using IDS"
You mean something like this?
#!/bin/bash
WORKDIR=/home
TARGETS=(
"$WORKDIR/bob"
"$WORKDIR/bec"
"$WORKDIR/john"
"$WORKDIR/mary"
"$WORKDIR/mike"
)
for target in "${TARGETS[#]}"; do
md5file="${target##*/}.md5"
if [[ -e "$md5file" ]]; then
md5sum --quiet -c "$md5file"
else
echo "create md5sum referenz file $md5file"
md5sum "$target"/* > "$md5file"
fi
done
On the first run it creates a reference file for every directory. On subsequent runs each directory is compared against its reference file and any modifications are shown. If you delete a reference file, it will be created again on the next run.
Explanation:
# loop over the array with the targets
for target in "${TARGETS[@]}"; do
    # strip the directory part (everything up to the last slash),
    # append .md5 and assign the result to the variable md5file
    md5file="${target##*/}.md5"
    # if md5file exists, use it as a reference and check whether
    # modifications have happened
    if [[ -e "$md5file" ]]; then
        # use the md5file as reference and only show
        # modified files, ignore ok messages
        md5sum --quiet -c "$md5file"
    else
        # when no md5file exists
        echo "create md5sum reference file $md5file"
        # use target, append /* to it and run md5sum,
        # redirecting output to md5file (the reference file for the next run)
        md5sum "$target"/* > "$md5file"
    fi
done
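A quick usage sketch, assuming the script above is saved as check_changes.sh (the name, and the /home/bob/notes.txt file below, are just examples) and is run from the directory where the .md5 reference files live:
./check_changes.sh                 # first run: creates bob.md5, bec.md5, john.md5, ...
echo "tampered" >> /home/bob/notes.txt
./check_changes.sh                 # second run: md5sum -c reports the modified file as FAILED
rm bob.md5                         # delete a reference file to have it recreated on the next run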
I have a bash script that applies all the git patches in a directory (see the script at the bottom). This script is run every time I deploy my website on my server.
I'm now running into an issue where, after a few weeks, a patch throws an error and exits the script with the error "patch does not apply". Does anyone know if there is a way to ignore broken/old patches and possibly just show an error that the patch no longer applies, rather than completely exiting the script and causing my website deployment to fail?
for file in ${PROJECT_PATH}/${PATCH_DIR}/*.patch; do
    if [[ -e ${file} ]]; then
        echo -n "Applying patch '${file}' ... "
        ${RUN_AS} git ${GIT_PROJECT_PATH} apply --directory="${PROJECT_PATH}" --unsafe-paths "${file}"
        echo "Done"
    fi
done
I don't see any reason why it would stop applying the patches. If one failed, you might get some error output, then you'd still print "Done" (which could be a little misleading, I think), and then the for loop would continue.
I guess for starters you need to check whether the patch was applied successfully or not. Something like this (adjust to your needs):
for file in ${PROJECT_PATH}/${PATCH_DIR}/*.patch; do
    if [[ -e ${file} ]]; then
        echo -n "Applying patch '${file}' ... "
        ${RUN_AS} git ${GIT_PROJECT_PATH} apply --directory="${PROJECT_PATH}" --unsafe-paths "${file}"
        if [ $? -ne 0 ]; then
            # there was an error
            echo git apply of patch $file failed on $GIT_PROJECT_PATH/$PROJECT_PATH
        else
            echo "Done"
        fi
    fi
done
Or something along those lines.
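If you'd rather skip patches that no longer apply instead of just reporting the failure afterwards, git apply also has a --check flag that does a dry run without touching the working tree. A sketch along those lines, reusing the same variables as your script:
for file in "${PROJECT_PATH}/${PATCH_DIR}"/*.patch; do
    [[ -e ${file} ]] || continue
    # Dry-run first: --check applies nothing and only reports whether the patch would apply
    if ${RUN_AS} git ${GIT_PROJECT_PATH} apply --directory="${PROJECT_PATH}" --unsafe-paths --check "${file}"; then
        echo -n "Applying patch '${file}' ... "
        ${RUN_AS} git ${GIT_PROJECT_PATH} apply --directory="${PROJECT_PATH}" --unsafe-paths "${file}"
        echo "Done"
    else
        echo "Skipping '${file}': it no longer applies cleanly" >&2
    fi
done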
I'm wondering if I've done this right. I'm trying to learn Bash and really want to learn the "best practices" the first time, so I don't adopt the sloppy/easy way.
What I'm wondering is: can I nest an IF/THEN statement like I've done below? Why or why not? Would the block below be better served by using an elif instead?
Lastly, I was hoping someone could shed some light on the use of "${foo}" and "$(bar)": curly braces or parentheses? So far I've used curly braces when defining a variable (foo='bar' is later called as "${foo}") and parentheses when capturing a command (foo=$(find . -type f -name bar) is later called as "$foo"). Or maybe I'm just way off and doing the same thing twice, I don't know. I'd love to hear what you've all got to say! :D
# Downloading the script bundle
echo "Lets get the script bundle and get to work!"
wget http://place.to.get/att.tar
# Logic switch: check whether the TAR bundle exists. If it does,
# verify the MD5 checksum (to prevent corruption).
# If verified, un-tar the bundle in our working directory;
# otherwise, exit with an error code.
if [[ -f att.tar ]]
then
    echo "Okay, we have the bundle, lets verify the checksum"
    sum=$(md5sum /root/att/att.tar | awk '{print $1}')
    if [[ $sum -eq "xxxxINSERT-CHECKSUM-HERExxxx" ]]
    then
        tar -xvf att.tar
    else
        clear
        echo "Couldn't verify the MD5 Checksum, something went wrong" | tee /tmp/att.$time.log
        sleep 0.5
        exit 1;
    fi
else
    clear
    echo "There was a problem getting the TAR bundle, exiting now ..." | tee /tmp/att.$time.log
    sleep 0.5
    exit 1;
fi
Overall comments
Nothing wrong with nested "if's," but early exit would be clearer
cut is cheaper than awk, but read is cheaper still
Simple string equality tests are marginally cheaper with "[" rather than "[["
Write error messages to STDERR
Use read and < <() rather than $( | cut -f1 -d' ') because it avoids a pipe and second fork/exec
Use functions
A simplified version
bail () {
    clear
    echo "${@}" | tee /tmp/att.${time}.log >&2
    exit 1
}
# Downloading the script bundle
echo "Lets get the script bundle and get to work!" >&2
wget http://place.to.get/att.tar || bail "There was a problem getting the TAR bundle, exiting now ..."
sum=''
read sum rest < <(md5sum /root/att/att.tar)
[ $sum == "xxxxINSERT-CHECKSUM-HERExxxx" ] || bail "Couldn't verify the MD5 Checksum, something went wrong"
tar -xvf att.tar || bail "Extract failed"
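On the "${foo}" versus "$(bar)" part of the question, which the rewrite above doesn't touch: braces and parentheses are not interchangeable styles. ${...} is parameter expansion (the braces just delimit the variable name), while $(...) is command substitution (it runs a command and captures its output). A minimal illustration:
foo='bar'
echo "${foo}_baz"                     # braces delimit the name: prints "bar_baz"
echo "$foo_baz"                       # without braces the shell looks up a variable named foo_baz: prints ""
files=$(find . -type f -name 'bar')   # runs find and captures its output
echo "$files"                         # plain $files (or ${files}) expands the captured text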
I'm trying to create a shell script that will download the latest Atomic gotroot rules to my server, unpack them, copy them to the correct folder, etc.
I've been reading shell tutorials and forum posts for most of the day, but the syntax escapes me for some of these. I have run all of these commands manually and I know they work.
I know I need to develop some error checking, but I'm just trying to get the commands to run correctly. The main problem at the moment is the syntax of the wget commands: I've had errors about missing semi-colons, divide by zero, and unsupported schemes. I've tried various quoting (single and double) and escaping of the -, / and " characters in various combinations.
Thanks for any help.
The raw wget command is
wget --user="jim" --password="xxx-yyy-zzz" "http://updates.atomicorp.com/channels/rules/subscription/VERSION"
#!/bin/sh
update_modsec_rules(){
    wget=/usr/bin/wget
    tar=/bin/tar
    apachectl=/usr/bin/apache2ctl
    TXT="Script Run Finished"
    WORKING_DIR="/var/asl/updates"
    TARGET_DIR="/usr/local/apache/conf/modsec_rules/"
    EXISTING_FILES="/var/asl/updates/modsec/*"
    EXISTING_ARCH="/var/asl/updates/modsec-*"
    WGET_OPTS='--user=jim --password=xxx-yyy-zzz'
    URL_BASE="http://updates.atomicorp.com/channels/rules/subscription"
    # change to working directory and cleanup any downloaded files and extracted rules in modsec/ directory
    cd $WORKING_DIR
    rm -f $EXISTING_ARCH
    rm -f $EXISTING_FILES
    rm -f VERSION*
    # wget to download VERSION file
    $wget ${WGET_OPTS} "${URL_BASE}/VERSION"
    # get current MODSEC_VERSION from VERSION file and save as variable
    source VERSION
    TARGET_DATE=$MODSEC_VERSION
    echo $TARGET_DATE
    # wget to download current archive
    $wget ${WGET_OPTS} "${URL_BASE}/modsec-${TARGET_DATE}.tar.gz"
    # extract archive
    echo "extracting files . . . "
    tar zxvf $WORKING_DIR/modsec-${TARGET_DATE}.tar.gz
    echo "copying files . . . "
    cp -uv $EXISTING_FILES $TARGET_DIR
    echo $TXT
}
update_modsec_rules "$@" 2>&1 | tee -a /var/asl/modsec_update.log
RESTART_APACHE="/usr/local/cpanel/scripts/restartsrv httpd"
$RESTART_APACHE
Here are some guidelines to use when writing shell scripts.
Always quote variables when you use them. This helps avoid the possibility of misinterpretation. (What if a filename contains a space?)
Don't trust fileglobbing on commands like rm. Use for loops instead. (What if a filename starts with a hyphen?)
Avoid subshells when possible. Your lines with backquotes make me itchy.
Don't exec if you can help it. And especially don't expect any parts of your script after your exec to actually get run (see the short demonstration below).
I should point out that while your shell may be bash, you've specified /bin/sh for execution of this script, so it is NOT a bash script.
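The exec point is worth a two-line demonstration, since it surprises people: exec replaces the current shell process with the given command, so nothing written after it ever runs. The restart command here is just borrowed from your script as an example:
#!/bin/sh
exec /usr/local/cpanel/scripts/restartsrv httpd
echo "this line never runs - the shell was replaced by restartsrv"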
Here's a rewrite with some error checking. Add salt to taste.
#!/bin/sh
# Linux
wget=/usr/bin/wget
tar=/bin/tar
apachectl=/usr/sbin/apache2ctl
# FreeBSD
#wget=/usr/local/bin/wget
#tar=/usr/bin/tar
#apachectl=/usr/local/sbin/apachectl
TXT="GOT TO THE END, YEAH"
WORKING_DIR="/var/asl/updates"
TARGET_DIR="/usr/local/apache/conf/modsec_rules/"
EXISTING_FILES_DIR="/var/asl/updates/modsec/"
EXISTING_ARCH="/var/asl/updates/"
URL_BASE="http://updates.atomicorp.com/channels/rules/subscription"
WGET_OPTS='--user="jim" --password="xxx-yyy-zzz"'
if [ ! -x "$wget" ]; then
echo "ERROR: No wget." >&2
exit 1
elif [ ! -x "$apachectl" ]; then
echo "ERROR: No apachectl." >&2
exit 1
elif [ ! -x "$tar" ]; then
echo "ERROR: Not in Kansas anymore, Toto." >&2
exit 1
fi
# change to working directory and cleanup any downloaded files
# and extracted rules in modsec/ directory
if ! cd "$WORKING_DIR"; then
echo "ERROR: can't access working directory ($WORKING_DIR)" >&2
exit 1
fi
# Delete each file in a loop.
for file in "$EXISTING_FILES_DIR"/* "$EXISTING_ARCH_DIR"/modsec-*; do
rm -f "$file"
done
# Move old VERSION out of the way.
mv VERSION VERSION-$$
# wget1 to download VERSION file (replaces WGET1)
if ! $wget $WGET_OPTS "${URL_BASE}/VERSION"; then
    echo "ERROR: can't get VERSION" >&2
    mv VERSION-$$ VERSION
    exit 1
fi
# get current MODSEC_VERSION from VERSION file and save as variable,
# but DON'T blindly trust and run scripts from an external source.
if grep -q '^MODSEC_VERSION=' VERSION; then
    TARGET_DATE="`sed -ne '/^MODSEC_VERSION=/{s/^[^=]*=//p;q;}' VERSION`"
    echo "Target date: $TARGET_DATE"
fi
# Download current archive (replaces WGET2)
if ! $wget ${WGET_OPTS} "${URL_BASE}/modsec-$TARGET_DATE.tar.gz"; then
    echo "ERROR: can't get archive" >&2
    mv VERSION-$$ VERSION # Do this, don't do this, I don't know your needs.
    exit 1
fi
# extract archive
if [ ! -f "$WORKING_DIR/modsec-${TARGET_DATE}.tar.gz" ]; then
echo "ERROR: I'm confused, where's my archive?" >&2
mv VERSION-$$ VERSION # Do this, don't do this, I don't know your needs.
exit 1
fi
tar zxvf "$WORKING_DIR/modsec-${TARGET_DATE}.tar.gz"
for file in "$EXISTING_FILES_DIR"/*; do
cp "$file" "$TARGET_DIR/"
done
# So far so good, so let's restart apache.
if $apachectl configtest; then
    if $apachectl restart; then
        # Success!
        rm -f VERSION-$$
        echo "$TXT"
    else
        echo "ERROR: PANIC! Apache didn't restart. Notify the authorities!" >&2
        exit 3
    fi
else
    echo "ERROR: Apache configs are broken. We're still running, but you'd better fix this ASAP." >&2
    exit 2
fi
Note that while I've rewritten this to be more sensible, there is certainly still a lot of room for improvement.
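One more note on the quoting trouble that started all this: putting quote characters inside a string variable (as in the original WGET1/WGET2 attempts) means wget receives those quotes literally as part of the username, password and URL. If you're happy to make this a bash script rather than /bin/sh, an array is the usual fix, since each element is passed through as exactly one word. A minimal sketch using the credentials from the question:
#!/bin/bash
# Store only the arguments, one array element each; no embedded quotes needed.
wget_opts=(--user=jim --password=xxx-yyy-zzz)
url_base="http://updates.atomicorp.com/channels/rules/subscription"
# "${wget_opts[@]}" expands to one word per element, so nothing gets re-split or re-quoted.
wget "${wget_opts[@]}" "${url_base}/VERSION"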
You have two options:
1- Change the variable so it holds only the arguments:
WGET1=' --user="jim" --password="xxx-yyy-zzz" "http://updates.atomicorp.com/channels/rules/subscription/VERSION"'
and then run
wget $WGET1
(and do the same for WGET2).
Or
2- Encapsulate $WGET1 in backquotes ``, e.g.:
`$WGET1`
This applies to any command you're executing out of a variable.
Suggested changes:
#!/bin/sh
TXT="GOT TO THE END, YEAH"
WORKING_DIR="/var/asl/updates"
TARGET_DIR="/usr/local/apache/conf/modsec_rules/"
EXISTING_FILES="/var/asl/updates/modsec/*"
EXISTING_ARCH="/var/asl/updates/modsec-*"
WGET1='wget --user="jim" --password="xxx-yyy-zzz" "http://updates.atomicorp.com/channels/rules/subscription/VERSION"'
WGET2='wget --user="jim" --password="xxx-yyy-zzz" "http://updates.atomicorp.com/channels/rules/subscription/modsec-$TARGET_DATE.tar.gz"'
## change to working directory and cleanup any downloaded files and extracted rules in modsec/ directory
cd $WORKING_DIR
rm -f $EXISTING_ARCH
rm -f $EXISTING_FILES
## wget1 to download VERSION file
`$WGET1`
## get current MODSEC_VERSION from VERSION file and save as variable
source VERSION
TARGET_DATE=`echo $MODSEC_VERSION`
## WGET2 command to download current archive
`$WGET2`
## extract archive
tar zxvf $WORKING_DIR/modsec-$TARGET_DATE.tar.gz
cp $EXISTING_FILES $TARGET_DIR
## restart server
exec '/usr/local/cpanel/scripts/restartsrv_httpd' $*;
Pro tip: if you need string substitution, using ${VAR} is much better because it eliminates ambiguity, e.g.:
tar zxvf $WORKING_DIR/modsec-${TARGET_DATE}.tar.gz
I'm fairly new to using bash and was trying to create an autograder script for running some test cases. Currently my bash script seems to be acting strangely: when I have the -e flag set, bash just exits when a diff has a positive size, and when the -e flag is not set, the script ignores any differences in the diff files and says that all tests passed.
The script exits immediately after the "write_diff_out=...." command; the next line is not printed. I've only included the diffing portion of the script, as everything else runs fine (and the files all exist as well).
# Validate outputs and print results
echo "> Comparing current build's final memory output with golden memory output...";
for file in `ls test_progs`;
do
    file=$(echo $file | cut -d '.' -f1);
    echo "$file";
    write_diff_out=$(diff ./log/$file.writeback.out ./log/$file.writeback.gold.out > ./diff/$file.writeback.diff);
    echo "Finished write_diff";
    program_diff_out=$(diff -u <(grep -E '###' ./log/$file.program.out) <(grep -E '###' ./log/$file.program.gold.out) > ./diff/$file.program.diff);
    echo "Finished program diff";
    if [ -z "$write_diff_out" ] && [ -z "$program_diff_out" ]; then
        printf "%20s:\e[0;32mPASSED\e[0m\n" "$file";
    else
        printf "%20s:\e[0;31mFAILED\e[0m\n" "$file";
    fi
done
echo "> Done comparing test outputs.";
Feel free to suggest a better way of formatting the diff commands as well; I know there are different methods of writing them.
I don't know exactly what your problem is, but I have rewritten your script to conform to some best practices. Perhaps it will work better.
#!/bin/bash
# Debugging mode: prints every command as executed, remove when unneeded
set -x
# Validate outputs and print results
echo "> Comparing current build's final memory output with golden memory output..."
for path in test_progs/*; do
    file="${path##*/}"    # strip the test_progs/ directory prefix
    file="${file%.*}"     # strip the extension
    echo "$file"
    # will PASS when both diffs return zero (i.e. no differences were found)
    if diff "log/$file.writeback.out" \
            "log/$file.writeback.gold.out" > \
            "diff/$file.writeback.diff" && \
       diff -u <(grep -E '###' "log/$file.program.out") \
               <(grep -E '###' "log/$file.program.gold.out") > \
               "diff/$file.program.diff"; then
        printf '%20s:\e[0;32mPASSED\e[0m\n' "$file"
    else
        printf '%20s:\e[0;31mFAILED\e[0m\n' "$file"
    fi
done
echo "> Done comparing test outputs."
It avoids parsing ls, uses quotes where they are due, and checks diff's exit status directly instead of storing something in a variable. (If you do need a test, prefer [[ over [: you don't need to quote variables inside [[.)
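As for the original symptom with set -e: when the command substitution in an assignment returns non-zero, the assignment itself counts as a failing command, so write_diff_out=$(diff ...) aborts the script as soon as a diff finds differences (diff exits with status 1 when the files differ). Commands tested in an if condition are exempt, which is why checking the exit status directly, as above, still works with -e set. A tiny illustration:
#!/bin/bash
set -e
out=$(diff <(echo a) <(echo b))   # diff exits with 1 because the inputs differ
echo "never reached"              # set -e has already aborted the script at the line above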
If you really wanted to store diff's output in a variable, you would do this:
write_diff_out="$(diff "log/$file.writeback.out" "log/$file.writeback.gold.out" | tee "diff/$file.writeback.diff")"
Then $write_diff_out would contain the same data the diff/$file.writeback.diff file has.
EDIT: edited my answer a bit to implement some of the things from the comments.