How do I cache dependencies without knowing their version or when they changed? - caching

I have written some code that uses a shell script to download and install a toolchain (not mine, but released on GitHub itself in .tar.gz form). I wanted all the artifacts in the latest release of that toolchain to be downloaded and saved to a directory that is cached through GitHub Actions. And whenever there is a release newer than the one the cache already has, i.e. the tag of the latest release is not equal to the currently downloaded release, download the newer one and update the existing cache. But reading more of the docs, I found that caches cannot be changed. Here is what I currently have in the script:
#!/bin/sh
LATEST_API_URL="https://api.github.com/repos/.../.../releases/latest"
LATEST_TOOLCHAINS_TAG=$(curl -s "$LATEST_API_URL" | grep "tag_name.*" | cut -d: -f2 | cut -d\" -f2)
TOOLCHAINS_PACKED_SAVE_DIR="../toolchains_packed"
ROOT_DIR="$(dirname "$(readlink -f "$0")")"
cd "$TOOLCHAINS_PACKED_SAVE_DIR"
if [ ! -f "./toolchains_tag.metadata" ]; then
    echo 'NEW FILE' > "./toolchains_tag.metadata"
fi
if [ "$LATEST_TOOLCHAINS_TAG" != "$(cat "./toolchains_tag.metadata")" ]; then
    DOWNLOADS_URLS=$(curl -s "$LATEST_API_URL" | grep "browser_download_url.*tar.gz" | cut -d: -f2,3 | tr -d \")
    # $DOWNLOADS_URLS is intentionally unquoted: it holds one URL per release asset
    wget $DOWNLOADS_URLS || { printf "wget failed: Please read above errors.\nExiting!\n"; exit 1; }
    echo "$LATEST_TOOLCHAINS_TAG" > "./toolchains_tag.metadata"
fi
for tar_pkg in *.tar.gz; do
    [ ! -f "$tar_pkg" ] && { printf "No *.tar.gz found.\nExiting\n"; exit 1; }
    CURR_COMP=$(echo "$tar_pkg" | cut -d. -f1)
    COMPILERS_LIST="$COMPILERS_LIST$CURR_COMP\n"
    sudo tar -xzvf "$tar_pkg" -C /
done
printf "$COMPILERS_LIST" > "./compilers_list.metadata"
cd "$ROOT_DIR"
And in the workflow file:
...
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Create toolchains Directory
        run: mkdir ../toolchains_packed
      - name: Get Cached toolchains Directory
        uses: actions/cache@v3.2.3
        with:
          path: ../toolchains_packed
          key: toolchains_packed
      - name: Setup toolchains
        run: ./path/to/the_script.sh
...
NOTE: This code works successfully in fetching the latest release and, as intended, doesn't fetch one when no update is available. I am only asking for a way to replace the older cache with a newer one.
I previously understood caching on GitHub to mean that caches were automatically updated across different runs of a workflow. But what I have now read says that not only do caches not update automatically, they can never be updated. As a result, the current scenario is that once the GitHub cache is generated, it will never be updated again, and the script will have to download the packages every time, defeating the entire point of caching. So I usually also have to keep an eye on the latest release of the toolchain and delete the cache manually if an update is available.
Personal Thoughts on the Issue: I have wondered whether I can name my caches using the tag stored at ../toolchains_packed/toolchains_tag.metadata. But that file is itself part of the cache, and hence I don't know how I could do this.
Final Question: How can I update my code such that I do not have to do anything manually to update the cache every time there is a new release of the toolchain available?

I got my answer. I looked at some real-life examples from different repos on GitHub and, taking inspiration from @Azeem's comments, made some changes which worked well. Here is the edited script:
#!/bin/sh
# The script is split into three parts:
# 1. Check for an available update ('-c' command line option)
# 2. Download the available update ('-d' command line option) -- run only
#    when '-c' returned with exit code '1'.
# 3. Install the toolchains ('-i' command line option)
LATEST_API_URL="https://api.github.com/repos/.../releases/latest"
TOOLCHAINS_PACKED_SAVE_DIR="../toolchains_packed"
ROOT_DIR="$(dirname "$(readlink -f "$0")")"
LATEST_TOOLCHAINS_TAG="$(curl -s "$LATEST_API_URL" | grep "tag_name" | cut -d: -f2 | cut -d\" -f2)"
cd "$TOOLCHAINS_PACKED_SAVE_DIR"
case $1 in
    -c)
        UPDATE_AVAILABLE=0
        if [ ! -f ./toolchains_tag.metadata ]; then
            echo 'NEW FILE' > ./toolchains_tag.metadata
            UPDATE_AVAILABLE=1
        fi
        if [ "$LATEST_TOOLCHAINS_TAG" != "$(cat ./toolchains_tag.metadata)" ]; then
            rm -rf ./*
            UPDATE_AVAILABLE=1
        fi
        exit $UPDATE_AVAILABLE
        ;;
    -d)
        TC_DOWNLOADS_URLS="$(curl -s "$LATEST_API_URL" | grep "browser_download_url" | cut -d: -f2,3 | tr -d \")"
        echo "wget -t 3 $TC_DOWNLOADS_URLS"
        # $TC_DOWNLOADS_URLS is intentionally unquoted: it holds one URL per asset
        wget -t 3 $TC_DOWNLOADS_URLS || { printf "wget failed: Please read above errors.\nExiting!\n"; exit 1; }
        echo "$LATEST_TOOLCHAINS_TAG" > ./toolchains_tag.metadata
        ;;
    -i)
        for tar_pkg in *.tar.gz; do
            [ ! -f "$tar_pkg" ] && { printf "No *.tar.gz found.\nExiting\n"; exit 1; }
            CURR_COMP=$(echo "$tar_pkg" | cut -d. -f1)
            COMPILERS_LIST="$COMPILERS_LIST$CURR_COMP\n"
            sudo tar -xzf "$tar_pkg" -C /
        done
        printf "$COMPILERS_LIST" > "./compilers_list.metadata"
        ;;
esac
cd "$ROOT_DIR"
exit 0
and the edited workflow file:
...
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Create toolchains Directory
        run: |
          mkdir -p toolchains_packed
          ln -s `pwd`/toolchains_packed ../toolchains_packed
      - name: Get Cached toolchains Directory
        uses: actions/cache@v3.2.3
        with:
          path: toolchains_packed
          key: toolchains_packed-${{ github.sha }}
          restore-keys: toolchains_packed-
      - name: Check For toolchains' Update
        id: toolchains_update_check
        run: |
          if ! ./build_helper/github/setup_toolchains.sh -c; then
            echo "update_available=true" >> $GITHUB_OUTPUT
          else
            echo "update_available=false" >> $GITHUB_OUTPUT
          fi
      - name: Delete toolchains' Cache
        if: steps.toolchains_update_check.outputs.update_available == 'true'
        uses: snnaplab/delete-branch-cache-action@v1.0.0
      - name: Download toolchains' Update
        if: steps.toolchains_update_check.outputs.update_available == 'true'
        run: ./build_helper/github/setup_toolchains.sh -d
      - name: Install toolchains
        run: ./build_helper/github/setup_toolchains.sh -i
...
Explanation:
I changed my script to make it capable of performing three distinct tasks:
1. Check whether the cache is the latest (or whether there is any cache at all)
2. Download the update (the workflow runs this only if (1) signals that an update is available -- exit code 1 in this case)
3. Install the toolchains (run every time -- caching is done for the packed .tar.gz files only)
I also changed my workflow to do these things (in order):
1. Get the cache [NOTE: As ...-${{ github.sha }} is used as the cache's primary key, the cache will fail to update if the workflow is re-run without a new commit. A tag-based key that avoids this is sketched just after this list.]
2. Check for update availability [by making use of the script]
3. Delete the cache [NOTE: For the sake of simplicity, I used snnaplab/delete-branch-cache-action@v1.0.0, which deletes all existing caches on the branch. If you want to preserve any cache, please use gh-actions-cache.]
4. Download the update
5. Install the update
Steps (3) and (4) run only if (2) found an update, i.e. the script returned with exit code 1.
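As an alternative to deleting the cache, the release tag itself could be embedded in the cache key, so a new release simply produces a new cache entry and the delete step becomes unnecessary. A minimal sketch (step names are illustrative; the tag is parsed the same way as in the script above):
- name: Get Latest toolchains Tag
  id: toolchains_tag
  run: |
    TAG=$(curl -s "https://api.github.com/repos/.../releases/latest" | grep "tag_name" | cut -d: -f2 | cut -d\" -f2)
    echo "tag=$TAG" >> $GITHUB_OUTPUT
- name: Get Cached toolchains Directory
  uses: actions/cache@v3.2.3
  with:
    path: toolchains_packed
    key: toolchains_packed-${{ steps.toolchains_tag.outputs.tag }}
With this approach, stale entries linger until GitHub's normal cache eviction (unused for 7 days, or the per-repository size limit) removes them.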

Related

Remove git files/directories older than x days via GitHub Action

We use a gh-pages branch in our repository to host a static website and frequently commit new information to this branch. Those files often get stale, as we push to a subdirectory per feature branch in the same repository.
The directory structure in my gh-pages branch is similar to the following:
.
|-- README.md
|-- JIRA-1234-feature
|   `-- graph
|-- JIRA-4567-bugfix
|   `-- graph
|-- JIRA-7890-branch-name
|   `-- testing
I want to remove directories via a GitHub Action for which the last update was more than 5 days ago.
I naively tried to remove them via find /path/to/files* -mtime +5 -exec rm {} \;, but the operating system obviously uses the clone date as the last modified time.
I also found
git ls-tree -r --name-only HEAD | while read filename; do
    echo "$(git log -1 --format="%ad" --date="short" -- "$filename") $filename"
done
which prints the last git update and the file name like this:
2023-01-12 JIRA-1234-test/index.html
2023-01-12 JIRA-1234-test/static/test.css
I don't know how to trigger file removal commands from this list, though.
How would I have to modify the following action to remove the old files?
name: Prune GH Pages branch
on:
  workflow_dispatch:
jobs:
  upload:
    runs-on: ubuntu-latest
    timeout-minutes: 15
    steps:
      - name: Checkout
        uses: actions/checkout@v3
        with:
          ref: gh-pages
      - name: Cleanup
        run: |
          # TODO: REMOVE FILES HERE
          git ls-tree -r --name-only HEAD | while read filename; do
            echo "$(git log -1 --format="%ad" --date="short" -- $filename) $filename"
          done
      - name: Commit & Push
        run: |
          if [ $(git status --porcelain | wc -l) -eq "0" ]; then
            echo "git repo is clean."
          else
            git add -A
            git commit -m "branch cleanup"
            git push
          fi
Unfortunately, I didn't find a way to make a nice one-liner for the requirement. We need the following bash script. I have commented all the important steps.
#!/bin/bash
# Validate that $1 is a positive number of days
[[ $1 != +([[:digit:]]) ]] &&
    echo "$1: The script has to be run with a positive number argument" && exit 1
# Get the timestamp from X days ago
X_DAYS_AGO_TIMESTAMP=$(date -d "$1 days ago" +%s)
# Iterate over all files in the repository
for file in $(git ls-files); do
    echo -n "."
    # Get the timestamp of the last commit that modified this file
    LAST_MODIFIED_TIMESTAMP=$(git log -1 --format="%at" -- "$file")
    # If the file hasn't been modified within the last $1 days
    if [ "$LAST_MODIFIED_TIMESTAMP" -lt "$X_DAYS_AGO_TIMESTAMP" ]; then
        # Remove the file from the repository (git rm also stages the deletion)
        echo -e "\nRemoving $file last modified at $(date -d "@$LAST_MODIFIED_TIMESTAMP")"
        git rm --quiet "$file"
    fi
done
# Commit the changes (if any)
if ! git diff --exit-code --quiet --staged; then
    git commit -m "Remove files not modified within the last $1 days"
else
    echo "No files removed"
fi
I can elaborate if something is not clear enough.
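For completeness, here is one way the Cleanup step could call the script, assuming it is committed to the gh-pages branch as prune_old_files.sh (the name is illustrative). Note that actions/checkout@v3 clones with a fetch depth of 1 by default, so fetch-depth: 0 is needed for the per-file git log timestamps to reflect real history:
- name: Checkout
  uses: actions/checkout@v3
  with:
    ref: gh-pages
    fetch-depth: 0
- name: Cleanup
  run: |
    chmod +x ./prune_old_files.sh
    ./prune_old_files.sh 5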

Github Actions: Why an intermediate command failure in shell script would cause the whole step to fail?

I have a step in a Github Actions job:
- name: Check for changes
  run: |
    diff=$( git diff --name-only 'origin/main' )
    changed_files=$( echo $diff | grep -c src/my_folder ) # this fails
    # more script lines
    # that are unrelated
This fails with Error: Process completed with exit code 1., but only if grep finds nothing.
If there are matches in $diff, then this step works as intended. But of course it also needs to work without matches.
I can run this locally or inside a script without a problem; the exit code is always 0 (on a Mac).
I fail to understand what the problem is. After some hours of trial and error and research, I learned that grep is apparently tricky in GitHub Actions, but I found no hint or proper documentation on how to solve this exact case.
If I change my failing line to
echo $( echo $diff | grep -c src/my_folder ) # this works and prints out the result
this gets executed without problems.
But how do I get my grep output into my variable even when there are no findings?
According to the docs, GitHub Actions runs a step's commands with set -e enabled by default. This is why an intermediate command failure may cause the whole step to fail. To take full control of your step's shell options, you can do this:
- name: Check for changes
  shell: bash {0}
  run: |
    diff=$( git diff --name-only 'origin/main' )
    changed_files=$( echo $diff | grep -c src/my_folder )
    # ...
    # ...
Or you can just disable the default set -e at the beginning of the step's script:
- name: Check for changes
  run: |
    set +e
    diff=$( git diff --name-only 'origin/main' )
    changed_files=$( echo $diff | grep -c src/my_folder )
    # ...
    # ...
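A narrower variant (a sketch, not from the original answer) keeps set -e enabled and absorbs only grep's exit status, since grep -c exits with 1 whenever it counts zero matches:
- name: Check for changes
  run: |
    diff=$( git diff --name-only 'origin/main' )
    changed_files=$( echo "$diff" | grep -c src/my_folder || true )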
I would suggest using the dorny/paths-filter action, like:
- uses: dorny/paths-filter@v2
  id: changes
  with:
    filters: |
      src:
        - 'src/my_folder/**'
# run only if some file in 'src/my_folder' folder was changed
- if: steps.changes.outputs.src == 'true'
  run: ...
Check the output section for details about changes
I don't know what problem grep has in GitHub Actions, but you can try something like:
...
if [ -n "$diff" ]; then changed_files=$( echo "$diff" | grep -c src/my_folder || true ); else changed_files=0; fi
...
This way grep wouldn't run at all when $diff is empty; $changed_files would just get 0.

Installing Jekyll without root

I want to set up a jekyll blog on a shared server. When I try to install Jekyll I get "You don't have write permissions". How do I fix this without root or sudo?
More detail:
I have space on a shared server and don't have root access. I couldn't install Ruby, though the hosting company installed it upon my request.
When I try to install Jekyll I use
user@hosting.org [~]# gem install jekyll
and this is the response I get:
ERROR: While executing gem ... (Gem::FilePermissionError)
You don't have write permissions into the /usr/lib/ruby/gems/1.8 directory.
I have seen different suggestions for changing the GEM_PATH, which I have tried, including
export GEM_PATH=/home/user/something
But even after doing so
gem env
still results in
GEM PATHS:
- /usr/lib/ruby/gems/1.8
- /home/user/.gem/ruby/1.8
Any tips? Is it possible to install Jekyll without root or sudo privileges, or am I just making some rookie PATH error?
I didn't find the answer for a while. On the #jekyll IRC channel a user pointed me at the Arch wiki, and I discovered that the trick is to force the install as a single user:
gem install jekyll --user-install
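With --user-install, gems land under the user's home directory (for Ruby 1.8, typically ~/.gem/ruby/1.8), so the user gem bin directory usually has to be added to PATH as well, e.g.:
export PATH="$HOME/.gem/ruby/1.8/bin:$PATH"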
This worked for me on macOS:
1. Place the gems in the user's home folder. Add the commands below to .bashrc or .zshrc:
export GEM_HOME=$HOME/gems
export PATH=$HOME/gems/bin:$PATH
2. Run the installation command:
gem install jekyll bundler
3. Verify the installation:
jekyll -v
Use the documentation for a detailed reference:
https://jekyllrb.com/docs/troubleshooting/#no-sudo
The reason for that is that the default Ruby shipped with a Mac (I am assuming this, but it is true for some distributions of Linux as well) installs gems to a system folder that requires elevated permissions to modify. Strictly speaking, this is not a Ruby error.
That said, since Ruby 1.8.7 is no longer supported, you'd be better off avoiding it and using one of the alternative Ruby version-managing tools like chruby, rvm, or rbenv (I'd vote for chruby, btw). The documentation is pretty dense for all of those. The authors are quite helpful in resolving issues if you do end up having one or more.
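As a rough sketch of the version-manager route with rbenv (the Ruby version is illustrative; the ruby-build plugin is needed to compile Rubies), everything stays inside the user's home directory, so no root is required:
git clone https://github.com/rbenv/rbenv.git ~/.rbenv
git clone https://github.com/rbenv/ruby-build.git ~/.rbenv/plugins/ruby-build
export PATH="$HOME/.rbenv/bin:$PATH"
eval "$(rbenv init -)"
rbenv install 3.1.2   # any maintained Ruby version
rbenv global 3.1.2
gem install jekyll    # installs under ~/.rbenv, no sudo needed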
... am I just making some rookie PATH error?
Yes, I think so... I'm not sure why you're assigning GEM_PATH; I haven't needed to, and think ya perhaps wanted GEM_HOME instead. Though things may have changed between then and now.
TLDR
I usually write something such as...
## Ruby exports for user level gem & bundle installs
export GEM_HOME="${HOME}/.gem"
export PATH="${GEM_HOME}/bin:${PATH}"
... to somewhere like ~/.bash_aliases for each user that'll be authenticating to a server.
Then, within any git-shell-commands script for an authenticated user that makes use of Gems, source the above settings first.
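For example, a minimal sketch of such a script relying on those settings:
#!/usr/bin/env bash
source "${HOME}/.bash_aliases"
gem list jekyll   # gem commands now resolve against ${GEM_HOME}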
I want to set up a jekyll blog on a shared server. When I try to install Jekyll I get "You don't have write permissions". How do I fix this without root or sudo?
Might be worth checking out a project I published a little while ago. It's been written and tested on Linux systems with Bash versions >= 4; if you sort out Mac support, feel free to make a PR. Otherwise, for shared servers, the least amount of fuss may be had by sticking with Xenial from Ubuntu, or the freshest Raspberry-flavored Debian.
Here's some snippets that should aid in automating an answer to your question...
/usr/local/etc/Jekyll_Admin/shared_functions/user_mods/jekyll_gem_bash_aliases.sh
#!/usr/bin/env bash
jekyll_gem_bash_aliases(){    ## jekyll_gem_bash_aliases <user>
    local _user="${1:?No user name provided}"
    local _home="$(awk -F':' -v _user="${_user}" '$0 ~ "^" _user ":" {print $6}' /etc/passwd)"
    if [ -f "${_home}/.bash_aliases" ]; then
        printf '%s/.bash_aliases already exists\n' "${_home}" >&2
        return 1
    fi
    ## Save new user path variable for Ruby executables
    su --shell "$(which bash)" --command 'touch ${HOME}/.bash_aliases' --login "${_user}"
    tee -a "${_home}/.bash_aliases" 1>/dev/null <<'EOF'
## Ruby exports for user level gem & bundle installs
export GEM_HOME="${HOME}/.gem"
export PATH="${GEM_HOME}/bin:${PATH}"
EOF
    su --shell "$(which bash)" --command 'chmod u+x ${HOME}/.bash_aliases' --login "${_user}"
    printf '## %s finished\n' "${FUNCNAME[0]}"
}
The above is used by one of three scripts that make use of sudo level permissions, specifically jekyll_usermod.sh... but don't get too caught-up with grokking all the contortions that I'm asking of Bash, because the moral of the above function's story is that it writes something like...
## Ruby exports for user level gem & bundle installs
export GEM_HOME="${HOME}/.gem"
export PATH="${GEM_HOME}/bin:${PATH}"
... to somewhere like /srv/bill/.bash_aliases which'll get sourced in git-shell-commands scripts and/or other shared functions for account setup like the following...
/usr/local/etc/Jekyll_Admin/shared_functions/user_mods/jekyll_user_install.sh
#!/usr/bin/env bash
jekyll_user_install(){    ## jekyll_user_install <user>
    local _user="${1:?No user name provided}"
    su --shell "$(which bash)" --login "${_user}" <<'EOF'
source "${HOME}/.bash_aliases"
mkdir -vp "${HOME}"/{git,www}
## Initialize Jekyll repo for user account
_old_PWD="${PWD}"
mkdir -vp "${HOME}/git/${USER}"
cd "${HOME}/git/${USER}"
git init .
git checkout -b gh-pages
_ruby_version="$(ruby --version)"
printf 'Ruby Version: %s\n' "${_ruby_version}"
_ruby_version="$(awk '{print $2}' <<<"${_ruby_version%.*}")"
_ruby_version_main="${_ruby_version%.*}"
_ruby_version_sub="${_ruby_version#*.}"
if [[ "${_ruby_version_main}" -ge '2' ]] && [[ "${_ruby_version_sub}" -ge '1' ]]; then
    gem install bundler -v '< 2'
    gem install jekyll -v '3.8.5'
    bundle init
    bundle install --path "${HOME}/.bundle/install"
    bundle add jekyll-github-metadata github-pages
    bundle exec jekyll new --force --skip-bundle "${HOME}/git/${USER}"
    bundle install
else
    echo 'Please see to installing Ruby version >= 2.4' >&2
    echo 'Hints may be found at, https://jekyllrb.com/docs/installation/' >&2
fi
git config receive.denyCurrentBranch updateInstead
cat >> "${HOME}/git/${USER}/.gitignore" <<EOL
# Ignore files and folders generated by Bundler
Bundler
vendor
.bundle
Gemfile.lock
EOL
git add --all
git -c user.name="${USER}" -c user.email="${USER}@${HOSTNAME}" commit -m "Added files from Bundler & Jekyll to git tracking"
cd "${_old_PWD}"
EOF
    local _exit_status="${?}"
    printf '## %s finished\n' "${FUNCNAME[0]}"
    return "${_exit_status}"
}
Note, .bash_aliases is arbitrary as far as file naming, well so long as one is consistent, it could even be more explicit via something like .gems_aliases; end-users need not know what happens behind the curtains to make this magic happen in other words.
... which'll hopefully show one clear method of causing gem install someThing and related commands to search the user's installed packages first. Though in case another example is needed...
~/git-shell-commands/jekyll-init
#!/usr/bin/env bash
__SOURCE__="${BASH_SOURCE[0]}"
while [[ -h "${__SOURCE__}" ]]; do
    __SOURCE__="$(find "${__SOURCE__}" -type l -ls | sed -n 's#^.* -> \(.*\)#\1#p')"
done
__DIR__="$(cd -P "$(dirname "${__SOURCE__}")" && pwd)"
__NAME__="${__SOURCE__##*/}"
__AUTHOR__='S0AndS0'
__DESCRIPTION__='Initializes new Git repository with a gh-pages branch'
## Provides 'failure'
# source "${__DIR__}/shared_functions/failure"
# trap 'failure "LINENO" "BASH_LINENO" "${BASH_COMMAND}" "${?}"' ERR
## Provides: argument_parser <arg-array-reference> <acceptable-arg-reference>
source "${__DIR__}/shared_functions/arg_parser"
## Provides: git_add_commit <string>
source "${__DIR__}/shared_functions/git_shortcuts"
## Provides: __license__ <description> <author>
source "${__DIR__}/shared_functions/license"
usage(){
    _message="${1}"
    _repo_name="${_repo_name:-repository-name}"
    cat <<EOF
## Usage
# ssh ${USER}@host-or-ip ${__NAME__} ${_git_args[@]:-$_repo_name}
#
# ${__DESCRIPTION__}
#
# --quiet
#    Git initializes quietly
#
# --shared
#    Allow git push for group $(groups | awk '{print $1}')
#
# --template=<path>
#    Template git repository that git init should pull from
#
# ${_repo_name}
#    Name of repository to initialize or add Jekyll gh-pages branch to
#
## For detailed documentation of the above options.
## See: git help init
#
# --clean
#    Remove non-git related files and directories from gh-pages branch prior to
#    initializing Jekyll related files. This allows for files from previous branch
#    to remain separate from files being tracked on the gh-pages branch.
#
# -l --license
#    Shows script or project license then exits
#
# -h --help help
#    Displays this message and exits
#
## The following options may be used to modify the generated _config.yml file
#
# --title ${_title}
# --email ${_email}
# --twitter-username ${_twitter_username}
# --github-username ${_github_username}
EOF
    if [ -n "${_message}" ] && [[ "${_message}" != '0' ]]; then
        printf 'Error - %s\n' "${_message}" >&2
    fi
}
_args=("${@:?No arguments provided try: ${__NAME__} help}")
_valid_args=('--help|-h|help:bool'
             '--license|-l|license:bool'
             '--quiet:bool'
             '--clean:bool'
             '--shared:bool'
             '--template:path'
             '--title:print'
             '--email:print'
             '--twitter-username:posix'
             '--github-username:posix'
             '--repo-name:posix-nil')
argument_parser '_args' '_valid_args'
_exit_status="$?"
_git_args=()
if ((_quiet)); then _git_args+=('--quiet'); fi
if ((_shared)); then _git_args+=('--shared'); fi
if [ -n "${_template}" ]; then _git_args+=("--template='${_template}'"); fi
if [ -n "${_repo_name}" ]; then _git_args+=("${_repo_name}"); fi
## Set defaults for some variables if not already set
_github_username="${_github_username:-$USER}"
if [ -z "${_title}" ]; then
    for _word in ${_repo_name//[-_]/ }; do
        if [[ "${#_word}" -ge '4' ]]; then
            _temp_title+=("${_word^}")
        else
            _temp_title+=("${_word}")
        fi
    done
    _title="${_temp_title[@]}"
fi
_bundle_path="${HOME}/.bundle/install"
if ((_help)) || ((_exit_status)); then
    usage "${_exit_status}"
    exit "${_exit_status}"
elif ((_license)); then
    __license__ "${__DESCRIPTION__}" "${__AUTHOR__}"
    exit 0
fi
if [ -z "${_repo_name}" ]; then
    usage 'missing repository name argument!'
    exit "1"
fi
_git_path="${HOME}/git/${_repo_name:?No repository name provided}"
_old_PWD="${PWD}"
if [ -d "${_git_path}" ]; then cd "${_git_path}"; fi
_git_dir="$(git rev-parse --git-dir 2>/dev/null)"
if [[ "${_git_path}/${_git_dir}" == "${_git_path}/.git" ]]; then
    printf '# Skipping git init, path already tracked by git: %s\n' "${_git_preexisting_dir}"
elif [[ "${_git_path}/${_git_dir}" == "${_git_path}/." ]]; then
    echo '# Bare git repository detected, cannot install Jekyll to that right now'
    exit 1
else
    if [ -e "${HOME}/git-shell-commands/git-init" ]; then
        "${HOME}/git-shell-commands/git-init" ${_git_args[@]}
    else
        cd "${HOME}/git" || exit 1
        git init ${_git_args[@]}
    fi
fi
cd "${_git_path}" || exit 1
_git_branches="$(git branch --list)"
_orig_branch="$(awk '/\*/{print $2}' <<<"${_git_branches}")"
_pages_branch="$(awk '/gh-pages/{print $2}' <<<"${_git_branches}")"
if [ -n "${_pages_branch}" ]; then
    printf '# There is already a pages branch %s for repository %s\n' "${_pages_branch}" "${_repo_name}"
    exit 1
fi
git_add_commit "Added files on ${_orig_branch} prior to installing Bundler & Jekyll to gh-pages branch"
git checkout -b gh-pages
if [[ "$(git config receive.denyCurrentBranch)" != 'updateInstead' ]]; then
    git config receive.denyCurrentBranch updateInstead
fi
if ((_clean)); then
    for _path in ${_git_path}/*; do
        case "${_path}" in
            *'.git')       [[ -d "${_path}" ]] && continue ;;
            *'.gitignore') [[ -f "${_path}" ]] && continue ;;
        esac
        git rm -rf "${_path}"
    done
    git_add_commit 'Cleaned gh-pages branch of files from parent branch'
fi
modify_config_yml(){
    if ! [ -f "${_git_path}/_config.yml" ]; then
        printf 'Error - no Jekyll config file found under %s\n' "${_git_path}" >&2
        return 1
    fi
    if [ -n "${_title}" ]; then
        sed -i "/title:/ { s#:[a-zA-Z 0-9]*#: ${_title}#; }" "${_git_path}/_config.yml"
    fi
    if [ -n "${_email}" ]; then
        sed -i "/email:/ { s#:[a-zA-Z 0-9]*#: ${_email}#; }" "${_git_path}/_config.yml"
    fi
    if [ -n "${_twitter_username}" ]; then
        sed -i "/twitter_username:/ { s#:[a-zA-Z 0-9]*#: ${_twitter_username}#; }" "${_git_path}/_config.yml"
    fi
    if [ -n "${_github_username}" ]; then
        sed -i "/github_username:/ { s#:[a-zA-Z 0-9]*#: ${_github_username}#; }" "${_git_path}/_config.yml"
    fi
    if [[ "${_repo_name}" != "${_github_username}" ]]; then
        tee -a "${_git_path}/_config_baseurl.yml" 1>/dev/null <<EOF
# Use base URL to simulate GitHub pages behaviour
baseurl: "${_repo_name}"
EOF
    fi
}
source "${HOME}/.bash_aliases"
bundle init || exit "${?}"
bundle install --path "${_bundle_path}"
bundle add jekyll
bundle exec jekyll new --force --skip-bundle "${_git_path}"
modify_config_yml
bundle install
cat >> "${_git_path}/.gitignore" <<EOF
# Ignore files and folders generated by Bundler
Bundler
vendor
.bundle
Gemfile.lock
EOF
git_add_commit 'Added files from Bundler & Jekyll to git tracking'
[[ "${_old_PWD}" == "${_git_path}" ]] || cd "${_old_PWD}"
printf '# Clone %s via: git clone %s@domain_or_ip:%s\n' "${_repo_name}" "${USER}" "${_git_path//${HOME}\//}"
printf '# %s finished\n' "${__NAME__}"
... which also shows how to bundle install someThing to somewhere.
Good luck with the publishing and perhaps comment if ya get stuck.

svn checkout to deploy via shell

I have the following problem. I need to set up automatic uploads from an SVN repository to a deploy server, but with some particular features.
Here is how I wrote it:
# $1 - project; $2 - version (optional)
# rm -rf $projectDir
if [ "$2" == '' ]; then
    svn export $trunk $projectDir --force >> $log
    version=`svn info $trunk | grep Revision | awk '{print$2}'`
    svn copy $trunk $tags/$version -m "created while uploading last version of $1"
    echo "New stable version #$version of $1 is created
Uploading to last version is completed successfully"
else
    version=$2
    svn export $tags/$version/ $projectDir --force >> $log
    echo "Revert to version #$version is completed successfully"
fi
echo $version > $projectDir/version
chown -R $1:$1 $projectDir
But svn export doesn't delete files that were deleted in SVN, so I have to clean the directory before every export. That's not good.
Before this, I deployed with checkout, like this:
svn co $trunk >> $log
cp -ruf trunk/* $projectDir
svn info $trunk | grep Revision > $projectDir/version
chown -R $project:$project $projectDir
echo "uploading finished"
This works very well and is much faster than export (it transfers only changed files), but:
without automatic tag creation;
without the opportunity for nice reverts.
In my last script, co doesn't work because it tries to check out into one directory from different repository paths (trunk / some tag), which isn't possible.
So, question:
Can I relocate the project before checkout?
Can I diff the checked-out version against the existing version before export?
What can I do with the diff result? (Remove unneeded files after export?)
Thanks in advance.
Have you evaluated Capistrano? It can do a lot of what you're trying to achieve.
The following code was taken as the basis for the solution. It's simpler and, for me, fully solves the problem:
if [ "$2" == '' ]; then
    version=`svn info ${trunk} | grep Revision | awk '{print$2}'`
    if [ `cat ${projectWWW}/version` == "${version}" ]; then
        resultMessage="Project is up to date"
    else
        svn co ${trunk} ${projectRoot}/co >> ${log}
        cp -ruf ${projectRoot}/co/ ${projectRoot}/releases/${version}
        chown -R $1:$1 ${projectRoot}/releases/${version}
        resultMessage="New stable version #$version of $1 is created
Uploading to last version is completed successfully"
    fi
else
    version=$2
    resultMessage="Revert to version #$version is completed successfully"
fi
# -sfn: replace the symlink if it already exists from a previous run
ln -sfn ${projectRoot}/releases/${version} ${projectWWW}
echo ${version} > ${projectWWW}/version
echo ${resultMessage} >> ${log}

App Engine: Launching a script upon update/run

I'm working with App Engine and I'm thinking about using the LESS CSS extension in my next project. There's no good LESS CSS library written in Python, so I went with the original Ruby one, which works great out of the box. I'd like App Engine to execute lessc ./templates/css/style.less before running the development server and before uploading the files to the cloud. What is the best way to automate this? I'm thinking:
#run.sh:
lessc ./templates/css/style.less
.gae/dev_appserver.py --use_sqlite .
And
#deploy.sh
lessc ./templates/css/style.less
.gae/appcfg.py update .
Am I on the correct path or is there a more elegant way of doing things, perhaps at the appcfg.py level?
Thanks.
One option is to use the JavaScript version of Less and hence do the less-to-CSS conversion in the browser: simply upload your less-formatted file (see http://lesscss.org/ for details).
Alternately, I do the conversion (first with less, now I use sass) in a deploy script which does a number of things:
- checks that my source code control has no outstanding files checked out (uncommitted changes)
- joins and minifies my .js code (and runs jslint over it) into a single file
- generates other content (including stamping the source code control version as a version number into certain key files and as a parameter on some files to avoid caching issues), so my main page pulls in scripts with URLs such as "allmysource.js?v=585" -- the file might be static, but the added params force cache invalidation
- calls appcfg to perform the upload and checks the return code
- makes some calls to the real site with wget to check that the previously generated files are actually returned, by checking they're stamped with the expected version
- applies another source code control tag to say that the intended version was successfully deployed
My script also accepts a "-preview" flag, in which case it doesn't actually do the upload, but reports the version control comments for what's changed since the previous deployment.
me@here $ ./deploy -preview
Deployment preview...
Would deploy v596 to the production site (currently v593, previously v587)
594 Fix blah blah blah for X Y Z
595 New feature nah nah nah
596 Update help pages
This is pretty handy as a reminder of what I need to put in things like a changelog.
I plan to also expand it so that I can, as part of my source code control, add any code that needs running once only when deployed (eg database schema changes) and know that it'll be automatically run when I next deploy a new version.
Essence of the script below as people asked... it doesn't show my "check code, generate, join, and minify" as that's another script... I realise that the original question was asking about that step of course :) but you can see where you'd add the call to generate CSS etc
#!/bin/bash
# (bash rather than plain sh: the script relies on [[ ]], $RANDOM and the function keyword)
function abort () {
    echo
    echo "ERROR: $1"
    echo "$2"
    exit 99
}
function warn () {
    echo
    echo "WARNING: $1"
    echo "$2"
}
# Overrides the Gentoo eselect mechanism to force the python version the GAE scripts expect
export EPYTHON=python2.5
# names of tags used to label bzr versions
CURR_DTAG=deployed
PREV_DTAG=prevDeployed
# command line options
PREVIEW=0
IGNORE_BZR=0
# These next few vars are set to values to identify my site, insert your own values here...
APPID=your_gae_appid_here
ADMIN_EMAIL=your_admin_email_address_here
SRCDIR=directory_to_deploy
CHECK_URL=url_of_page_to_retrive_that_does_upload_initialisation
for ARG; do
    if [[ "$ARG" == "-preview" ]]; then
        echo "Deployment preview..."
        PREVIEW=1
    fi
    if [[ "$ARG" == "-force" ]]; then
        echo "Ignoring the fact some files may not be committed to bzr..."
        IGNORE_BZR=1
    fi
done
echo
# check bzr for uncommitted changes
BSTATUS=`bzr status`
if [[ "$BSTATUS" != "" ]]; then
    if [[ "$IGNORE_BZR" == "0" ]]; then
        abort "There are uncommitted changes - commit/revert/ignore all files before deploying" "$BSTATUS"
    else
        warn "There are uncommitted changes" "$BSTATUS"
    fi
fi
# get version numbers of the last deployed release etc
currver=`bzr log -l1 --line | sed -e 's/: .*//'`
lastver=`bzr log -rtag:${CURR_DTAG} --line | sed -e 's/: .*//'`
prevver=`bzr log -rtag:${PREV_DTAG} --line | sed -e 's/: .*//'`
lastlog=`bzr log -l 1 --line gae/changelog | sed -e 's/: .*//'`
RELEASE_NOTES=`bzr log --short --forward -r $lastver..$currver \
| perl -ne '$ver = $1 if /^ {0,4}(\d+) /; print " $ver $_" if ($ver and /^ {5,}\w/)' \
| grep -v "^ *$lastver "`
LOG_NOTES=`bzr log --short --forward -r $lastlog..$currver \
| perl -ne '$ver = $1 if /^ {0,4}(\d+) /; print " $ver $_" if ($ver and /^ {5,}\w/)' \
| grep -v "^ *$lastlog "`
# Crude but old habit - BUGBUGBUG is a marker in the code for things to be fixed before deployment
echo "Checking code for outstanding issues before deployment"
BUGSTATUS=`grep BUGBUGBUG js/*js`
if [[ "$BUGSTATUS" != "" ]]; then
    if [[ "$IGNORE_BZR" == "0" ]]; then
        abort "There are outstanding BUGBUGBUGs - fix them before deploying" "$BUGSTATUS"
    else
        warn "There are outstanding BUGBUGBUGs" "$BUGSTATUS"
    fi
fi
echo
echo "Deploy v$currver to the production site (currently v$lastver, previously v$prevver)"
echo "$RELEASE_NOTES"
echo
if [[ "$currver" -gt "$lastlog" && "$lastver" -ne "$lastlog" ]]; then
    echo "Changes since the changelog was last updated"
    echo "$LOG_NOTES"
    echo
fi
if [[ "$IGNORE_BZR" == "0" && $lastver -ge $currver ]]; then
    abort "There don't appear to be any changes to deploy..."
fi
if [[ "$PREVIEW" == "1" ]]; then
    exit 0
fi
$EPYTHON -c "import ssl" \
|| abort "$EPYTHON can't find ssl module for $EPYTHON - download it from pypi and install with the inbuilt setup.py"
# REMOVED - call to my script that calls jslint, generates files and compresses JS etc
# || abort "Generation of code failed"
/opt/google_appengine/appcfg.py --email=$ADMIN_EMAIL -v -A $APPID update $SRCDIR \
|| abort "Appcfg failed - upload presumably incomplete"
# move the tags to show we deployed properly
bzr tag -r $lastver --force ${PREV_DTAG}
bzr tag -r $currver --force ${CURR_DTAG}
echo
echo "Production site updated from v$lastver to v$currver (in turn from v$prevver)"
echo
echo "Now visiting $CHECK_URL to upload the source to the database"
# new version doesn't seem to always be there (may be caching by the webserver etc) to be uploaded into the database.. try again just in case
for cb in $RANDOM $RANDOM $RANDOM $RANDOM ; do
    prodver=`wget $CHECK_URL?_cb=$cb -q -O - | perl -ne 'print $1 if /^\s*Rev #(\d+)\s*$/'`
    if [[ "$currver" == "$prodver" ]]; then
        echo "OK: New version $prodver successfully deployed"
        exit 0
    fi
    echo "Retrying the upload of source to the database"
    sleep 5
done
abort "The new source doesn't seem to be loading into the database" "Try 'wget $CHECK_URL?_cb=$RANDOM -q -O -'"
It's not particularly big or clever, but it automates the upload job.
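To tie this back to the original question: the lessc call would go where the removed "generate files" step sits, along the lines of (paths illustrative):
lessc ./templates/css/style.less ./templates/css/style.css \
    || abort "lessc failed - CSS not regenerated"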
