Using github actions to publish documentation - python-sphinx

What I considered:
GitHub offers GitHub Pages to host documentation from either a folder on my master branch or a dedicated gh-pages branch, but that would mean committing build artifacts
I can also let readthedocs build and host the docs for me through webhooks, but that means learning how to configure Yet Another Tool at a point in time where I am trying to consolidate everything related to my project in GitHub Actions
I already have a docs-building process that works for me (using sphinx as the builder) and that I can also test locally, so I'd rather just leverage that instead. It has all the rules set up and drops some static html in an artifact - it just doesn't get served anywhere. Handling it in the workflow, where all the other deployment configuration of my project lives, feels better than scattering it over different tools or GitHub-specific options.
Is there already an action in the marketplace that allows me to do something like this?
name: CI
on: [push]
jobs:
  ...  # do stuff like building my-project-v1.2.3.whl, testing, etc.
  release_docs:
    steps:
      - uses: actions/sphinx-to-pages@v1  # I wish this existed
        with:
          dependencies:
            - some-sphinx-extension
            - dist/my-project*.whl
          apidoc_args:
            - "--no-toc"
            - "--module-first"
            - "-o docs/autodoc"
            - "src/my-project"
          build-args:
            - "docs"
            - "public"  # the content of this folder will then be served at
                        # https://my_gh_name.github.io/my_project/
In other words, I'd like to still have control over how the build happens and where artifacts are dropped, but do not want to need to handle the interaction with readthedocs or github-pages.
### Actions that I tried
❌ deploy-to-github-pages: runs the docs build in an npm container - will be inconvenient to make it work with python and sphinx
❌ gh-pages-for-github-action: no documentation
❌ gh-pages-deploy: seems to target host envs like jekyll instead of static content, and correct usage with yml syntax not yet documented - I tried a little and couldn't get it to work
❌ github-pages-deploy: looks good, but correct usage with yml syntax not yet documented
✅ github-pages: needs a custom PAT in order to trigger rebuilds (which is inconvenient) and uploads broken html (which is bad, but might be my fault)
✅ deploy-action-for-github-pages: also works, and looks a little cleaner in the logs. Same limitations as the solution above though: it needs a PAT and the served html is still broken.
The eleven other results when searching for github+pages on the action marketplace all look like they want to use their own builder, which sadly never happens to be sphinx.

In the case of managing sphinx using pip (requirements.txt), pipenv, or poetry, we can deploy our documentation to GitHub Pages as follows. The same workflow also works for other Python-based static site generators like Pelican and MkDocs. Here is a simple example for MkDocs; we just add the workflow as .github/workflows/gh-pages.yml
For more options, see the latest README: peaceiris/actions-gh-pages: GitHub Actions for GitHub Pages 🚀 Deploy static files and publish your site easily. Static-Site-Generators-friendly.
name: github pages
on:
  push:
    branches:
      - main
jobs:
  deploy:
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@v2
      - name: Setup Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.8'
      - name: Upgrade pip
        run: |
          # install pip=>20.1 to use "pip cache dir"
          python3 -m pip install --upgrade pip
      - name: Get pip cache dir
        id: pip-cache
        run: echo "::set-output name=dir::$(pip cache dir)"
      - name: Cache dependencies
        uses: actions/cache@v2
        with:
          path: ${{ steps.pip-cache.outputs.dir }}
          key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }}
          restore-keys: |
            ${{ runner.os }}-pip-
      - name: Install dependencies
        run: python3 -m pip install -r ./requirements.txt
      - run: mkdocs build
      - name: Deploy
        uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./site
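The question itself is about Sphinx rather than MkDocs; under the same workflow, only the build command and the published directory change. A minimal sketch of those two steps, assuming the Sphinx sources live in ./docs, the HTML is written to ./_build/html, and requirements.txt pulls in sphinx and its extensions (these paths are assumptions, not part of the original answer):
- name: Build Sphinx docs
  run: sphinx-build -b html docs ./_build/html
- name: Deploy
  uses: peaceiris/actions-gh-pages@v3
  with:
    github_token: ${{ secrets.GITHUB_TOKEN }}
    publish_dir: ./_build/html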

I got it to work, but there is no dedicated action to build and host sphinx docs on either github pages or readthedocs as of yet, so as far as I am concerned there is quite a bit left to be desired here.
This is my current release_sphinx job that uses the deploy-action-for-github-pages action and uploads to github-pages:
release_sphinx:
  needs: [build]
  runs-on: ubuntu-latest
  container:
    image: python:3.6
    volumes:
      - dist:dist
      - public:public
  steps:
    # check out sources that will be used for autodocs, plus readme
    - uses: actions/checkout@v1
    # download wheel that was built and uploaded in the build step
    - uses: actions/download-artifact@v1
      with:
        name: distributions
        path: dist
    # didn't need to change anything here, but had to add sphinx.ext.githubpages
    # to my conf.py extensions list. that fixes the broken uploads
    - name: Building documentation
      run: |
        pip install dist/*.whl
        pip install sphinx Pallets-Sphinx-Themes
        sphinx-apidoc --no-toc --module-first -o docs/autodoc src/stenotype
        sphinx-build docs public -b dirhtml
    # still need to build and set the PAT to get a rebuild on the pages job,
    # apart from that quite clean and nice
    - name: github pages deploy
      uses: peaceiris/actions-gh-pages@v2.3.1
      env:
        PERSONAL_TOKEN: ${{ secrets.PAT }}
        PUBLISH_BRANCH: gh-pages
        PUBLISH_DIR: public
    # since gh-pages has a history, this step might no longer be necessary.
    - uses: actions/upload-artifact@v1
      with:
        name: documentation
        path: public
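For reference, the sphinx.ext.githubpages change mentioned in the comment above is a single entry in conf.py; a minimal sketch (the other extension listed is only a placeholder for whatever is already there):
# conf.py
extensions = [
    "sphinx.ext.autodoc",        # placeholder for extensions already in use
    "sphinx.ext.githubpages",    # writes a .nojekyll file so GitHub Pages serves the generated HTML as-is
]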
Shoutout to the deploy action's maintainer, who resolved the upload problem within 8 minutes of me posting it as an issue.

Related

How to use dawidd6/action-download-artifact with pull_request trigger

This is a question for the github workflow action dawidd6/action-download-artifact.
There is no discussion board in https://github.com/dawidd6/action-download-artifact, so asking this question in this forum.
This is how I wish to use this workflow in my GitHub repo:
A pull request is created.
This triggers a workflow – let's call it the "build workflow" – which builds the entire repo and uploads the build artifacts.
Then another workflow – let's call it the "test workflow" – should start, which downloads the build artifact using action-download-artifact and runs some other actions.
Now if I put the trigger for the "test workflow" as pull_request, then how can I make it wait for the corresponding "build workflow" to complete? Do I specify the run_id?
For now I am using "workflow_run" as the trigger for the test workflow. But then, when a PR is created, it does not show the "test workflow" as one of the checks for the PR. Can you help me figure out the correct way of using the download-artifact action that would work for my purpose?
You could write two workflows where the first builds when the pull request is opened or edited, and the second executes the test when the pull request is closed and merged. The HEAD commit SHA could be used to identify the artifact name between the two workflows.
I'm going to reword your requirements slightly.
Build everything and upload the artifacts when a pull request is opened or edited (e.g. new commits added).
Download the artifact and test it when a pull request is closed and merged.
Here are two sample workflows that would accomplish that. You will need to create a token to share the artifacts between workflows (see secrets.GITHUB_TOKEN below).
Build.yml
name: Build
on:
  pull_request:
jobs:
  Build:
    runs-on: ubuntu-latest
    steps:
      # checkout is needed so that git rev-parse HEAD works below
      - uses: actions/checkout@v2
      - name: Environment Variables
        shell: bash
        run: |
          ARTIFACTS_SHA=$(git rev-parse HEAD)
          BUILD_ARTIFACTS=BuildArtifacts_${ARTIFACTS_SHA}
          echo "BUILD_ARTIFACTS=$BUILD_ARTIFACTS" >> $GITHUB_ENV
      - name: Build
        run: make
      - name: Capture Artifacts
        uses: actions/upload-artifact@v2
        with:
          # must match the artifact name that Test.yml downloads
          name: ${{ env.BUILD_ARTIFACTS }}
          path: path/to/artifact/
Test.yml
name: Test
on:
  pull_request:
    types: [closed]
jobs:
  Test:
    runs-on: ubuntu-latest
    steps:
      # checkout is needed so that git rev-parse HEAD works below
      - uses: actions/checkout@v2
      - name: Environment Variables
        shell: bash
        run: |
          ARTIFACTS_SHA=$(git rev-parse HEAD)
          BUILD_ARTIFACTS=BuildArtifacts_${ARTIFACTS_SHA}
          echo "BUILD_ARTIFACTS=$BUILD_ARTIFACTS" >> $GITHUB_ENV
      - name: Download Artifacts
        uses: dawidd6/action-download-artifact@v2
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          workflow: Build.yml
          name: ${{ env.BUILD_ARTIFACTS }}

Where should caching occur in a GitHub Action?

What is the correct placement of caching in a GitHub Actions workflow? Specifically, is it correct to place it before or after running setup of tools using another action?
For example if I'm using something like haskell/actions/setup should my use of actions/cache precede or follow that? Put another way: if setup subsequently installs updated components on a future run of my Action, will the corresponding parts of the cache be invalidated?
The cache action should be placed before any step that consumes or creates that cache. This step is responsible for:
defining cache parameters.
restoring the cache, if it was cached in the past.
GitHub Actions will then run a "Post *" step after all the steps, which will store the cache for future calls.
See the example workflow from the documentation.
For example, consider this sample workflow:
name: Caching Test
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Enable Cache
        id: cache-action
        uses: actions/cache@v2
        with:
          path: cache-folder
          key: ${{ runner.os }}-cache-key
      - name: Use or generate the cache
        if: steps.cache-action.outputs.cache-hit != 'true'
        run: mkdir cache-folder && touch cache-folder/hello
      - name: Verify we have our cached file
        run: ls cache-folder
On the first run the cache key misses, so the folder is generated and then saved by the post step; on the second run the cache is restored and the generation step is skipped.
GitHub will not invalidate the cache. Instead, it is the responsibility of the developer to ensure that the cache key is unique to the content it represents.
One common way to do this is to make the cache key contain a hash of a file that lives in the repository, so that changes to this file yield a different cache key. A good example of this is lock files that list all of your repository's dependencies (requirements.txt for Python, Gemfile.lock for Ruby, etc.).
This is achieved by a syntax similar to this:
key: ${{ runner.os }}-${{ hashFiles('**/lockfiles') }}
as described in the Creating a cache key section of the documentation.
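For the Ruby case mentioned above, for example, a cache step following this pattern might look like the sketch below (the vendor/bundle path is an assumption about where bundler is configured to install gems):
- name: Cache gems
  uses: actions/cache@v2
  with:
    path: vendor/bundle
    key: ${{ runner.os }}-gems-${{ hashFiles('**/Gemfile.lock') }}
    restore-keys: |
      ${{ runner.os }}-gems-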

Pulumi with GitHub Actions crashing parallel workflows with error: [409] Conflict: Another update is currently in progress. (e.g. with renovate)

Using GitHub Actions with Pulumi is a great experience because of the good Actions provided. But I tend to run into problems where multiple GitHub Actions workflows run in parallel (e.g. when renovate is configured and tries to update the repository's dependencies). Either the first workflow wins and does its job - and the others fail. Or every workflow fails (which also depends on the GitHub Actions workflow design). Then I get errors like this (see the full log here):
#### :tropical_drink: `pulumi --non-interactive up`
Previewing update (dev)
View Live: https://app.pulumi.com/jonashackt/scmbreakoutpulumi/dev/previews/fbf45825-5d8f-45bc-ad3e-c55b7576313e
pulumi:pulumi:Stack scmbreakoutpulumi-dev running
azure:core:ResourceGroup scm-breakout-rg-pulumi
azure:storage:Account scmbreakresources
azure:appservice:Plan asp-scmbreakoutrg
azure:storage:Container rawimages
azure:storage:Queue thumbnails
azure:storage:Container thumbnails
+ azure:appservice:AppService scmContactsApi create
+ azure:appservice:AppService scmResourceApi create
+ azure:appservice:FunctionApp scmFunctionApp create
+ azure:appservice:Slot scmResourceApiStg create
pulumi:pulumi:Stack scmbreakoutpulumi-dev
Resources:
+ 4 to create
7 unchanged
Updating (dev)
error: [409] Conflict: Another update is currently in progress.
To learn more about possible reasons and resolution, visit https://www.pulumi.com/docs/troubleshooting/#conflict
The log already points to a good resource (https://www.pulumi.com/docs/troubleshooting/#conflict). It's actually a feature of the Pulumi state management provided by app.pulumi.com:
One of the services that pulumi.com provides is concurrency control.
The service will allow at most one user to update a particular stack
at a time.
So when using only one stack, like the default dev stack at app.pulumi.com, every workflow run competes for that single stack.
Using GitHub Actions or other CI/CD platforms, this becomes an obstacle. I see 2 options here: we could either switch to another Pulumi state management backend (like the Local Filesystem Backend, which would not create a stack on app.pulumi.com but keep the state locally inside the CI job), or we could create a GitHub Actions job specific stack on app.pulumi.com, where the stack is named after the specific job id or something similar.
As I don't mind using app.pulumi.com here - and also like having the additional log if something goes wrong - I wanted a solution for the second option. The GitHub Actions workflow file design could be described with the following steps:
Standard Pulumi GitHub Action pipeline: Defining needed variables, checking out the repo, setting up the nodejs environment incl. installing the npm dependencies - and finally configuring the Pulumi CLI using the action-install-pulumi-cli Action.
Creating a Pulumi stack on app.pulumi.com using pulumi stack init github-${{ github.run_id }}, which uses the github.run_id GitHub Actions default context variable. This variable represents "a unique number for each run within a repository."
Leveraging the pulumi/actions@v2 Action (one or more times) in version v2 (since only from v2 onwards do we have the stack-name configuration option) and configuring the Pulumi app.pulumi.com stack name with stack-name: github-${{ github.run_id }}
Removing the Pulumi app.pulumi.com stack using a final pulumi stack rm github-${{ github.run_id }} -y
The full GitHub Action workflow looks like this:
name: pulumi-preview-up
on: [push]
env:
  ARM_SUBSCRIPTION_ID: ${{ secrets.ARM_SUBSCRIPTION_ID }}
  ARM_CLIENT_ID: ${{ secrets.ARM_CLIENT_ID }}
  ARM_CLIENT_SECRET: ${{ secrets.ARM_CLIENT_SECRET }}
  ARM_TENANT_ID: ${{ secrets.ARM_TENANT_ID }}
  PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}
jobs:
  preview-up-destroy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: In order to use the Pulumi v2 action, we need to setup the Pulumi project specific language environment
        uses: actions/setup-node@v2
        with:
          node-version: '14'
      - name: After setting up the Pulumi project specific language environment, we need to install the dependencies also (see https://github.com/pulumi/actions#example-workflows)
        run: npm install
      - name: Install Pulumi CLI so that we can create a GHA pipeline specific Pulumi Stack
        uses: pulumi/action-install-pulumi-cli@v1.0.1
      - name: Create GHA pipeline specific Pulumi Stack incl. Azure location
        run: |
          pulumi stack init github-${{ github.run_id }}
          pulumi config set azure:location WestEurope
      - uses: pulumi/actions@v2
        with:
          command: preview
          stack-name: github-${{ github.run_id }}
      - uses: pulumi/actions@v2
        with:
          command: up
          stack-name: github-${{ github.run_id }}
      - uses: pulumi/actions@v2
        with:
          command: destroy
          stack-name: github-${{ github.run_id }}
      - name: Remove the GHA pipeline specific Pulumi Stack
        run: |
          pulumi stack rm github-${{ github.run_id }} -y
Now the app.pulumi.com overview shows a separate, run-specific stack for each workflow run, even when multiple GitHub Actions workflows run in parallel.

How to reuse a strategy matrix across several jobs in Github workflows

I would like to avoid repeating a strategy matrix across jobs:
jobs:
  build-sdk:
    runs-on: macOS-latest
    strategy:
      fail-fast: false
      matrix:
        qt-version: ['5.15.1']
        ios-deployment-architecture: ['arm64', 'x86_64']
        ios-deployment-target: ['12.0']
    steps:
      …
  create-release:
    needs: build-sdk
    runs-on: macOS-latest
    steps:
      …
  publish-sdk:
    needs: [build-sdk, create-release]
    runs-on: macOS-latest
    strategy:
      fail-fast: false
      matrix: ?????
    steps:
      …
Is this possible (without creating a job to create the matrix as JSON itself)?
There's an action that allows uploading multiple assets to the same release from a matrix build that's triggered on push to a tag. Someone filed an issue about this specific use-case, and the action's author responded with
Assets are uploaded for the GitHub release associated with the same tag, so as long as this action is run in a workflow run for the same tag, all assets should get added to the same GitHub release.
This suggests that a workflow like this would probably meet your needs:
on:
  push:
    tags:
      - 'v*' # Push events to matching v*, i.e. v1.0, v20.15.10
jobs:
  release:
    runs-on: macOS-latest
    strategy:
      fail-fast: false
      matrix:
        qt-version: ['5.15.1']
        ios-deployment-architecture: ['arm64', 'x86_64']
        ios-deployment-target: ['12.0']
    steps:
      - name: build SDK
        run: ...
      - name: Create Release
        uses: softprops/action-gh-release@v1
        with:
          # SDK_file1 and SDK_file2 were created in the previous build step
          files: |
            SDK_file1
            SDK_file2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GITHUB_REPOSITORY: username/reponame
      - name: publish SDK
        run: ...
I've simplified what you would need to do, but I'm guessing you might be wanting to upload assets with names reflecting their applicable matrix options. For that detail, I recommend adding an explicit step in your job to create the asset's filename and add it to the job environment, somewhat similar to what I've done here:
- name: Name asset
  run: |
    BINARY_NAME=sdk-qt${{matrix.qt-version}}-iOS${{matrix.ios-deployment-target}}-${{matrix.ios-deployment-architecture}}
    echo "BINARY_NAME=$BINARY_NAME" >> $GITHUB_ENV
Then, when your build step generates your assets, you can name them with the filename in ${{env.BINARY_NAME}}, and pass that same name to the release creation step like I've done in my asset release step here.
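For example, assuming the build step packages the SDK into an archive named after BINARY_NAME (the .zip suffix here is just an illustrative assumption), the release step can then pick it up by that name:
- name: Create Release
  uses: softprops/action-gh-release@v1
  with:
    files: ${{ env.BINARY_NAME }}.zip
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}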

How can we use environment variables in a Jekyll config file?

Is there a way I can use one of my bash environment variables (say $FOO) in my Jekyll _config.yml file?
I've tried using:
foo = <%= ENV['FOO'] %>
But it didn't work as the Ruby code wasn't interpreted.
Versions used:
Ruby: 2.1.2
Jekyll: 2.5.3
If your goal is to use environment variables as Liquid items like {{ site.something }}, you might give this line in your Gemfile a go:
gem 'jekyll-environment-variables', group: :jekyll_plugins
And then you'll be able to use {{ site.env.HOME }} and expect it to be converted to something like /home/ubuntu in the output HTML.
Disclosure: I am the owner of the gem and have been using it personally for a long time.
The answer by @elryco is close but not quite right, at least for my setup. It took some trial and error, but this finally worked. Note this only works for certain env vars supported by the contentful plugin.
Note that you need the gem jekyll-contentful-data-import (v1.7.0 or up) for this solution to actually work.
Bash environment (e.g., ~/.bash_profile):
export CONTENTFUL_ACCESS_TOKEN=foo
export CONTENTFUL_SPACE_ID=bar
In _config.yml, reference them as:
contentful:
  spaces:
    - example:
        space: ENV_CONTENTFUL_SPACE_ID
        access_token: ENV_CONTENTFUL_ACCESS_TOKEN
This is the same as what's written in the Github documentation.
I recently had to try and do this myself. It turns out you can't put environment variables directly into a Jekyll config file, but you can write a rake task that will take environment variables and apply them to your config.
Here's an example:
# Rakefile
require 'jekyll'

task default: %w[build]

desc "Build the site"
task :build do
  config = Jekyll.configuration({
    url: ENV["SITE_URL"],
  })
  site = Jekyll::Site.new(config)
  Jekyll::Commands::Build.build(site, config)
end
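With that Rakefile in place, the environment variable can simply be passed on the command line when building, for example:
SITE_URL=https://example.com bundle exec rake build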
Unfortunately there is no direct way of accessing it in Liquid tags, at least not officially.
But I wrote a wrapper script which reads the environment variable before Jekyll starts, appends it to the _config.yml file, and deletes it again after the build.
echo "secret-variable: $PASSWORD" >> _config.yml
bundle exec jekyll build -d target
sed '$d' _config.yml //this is to delete the last line
Now I'm free to use site.secret-variable anywhere in the Liquid tags.
I know that this is not the right way of doing it, but neither is writing a custom Ruby script.
I personally find the use of a Ruby Jekyll plugin more appropriate and portable. There's a very simple yet effective solution available here.
The main idea is that Ruby has access to the ENV variables, so you can use a small Ruby plugin to load all the information you want from the environment into your site.config Liquid array. And you can define default values as well.
Please note that the example given in the link isn't the most relevant since the prod/staging environment is already offered by Jekyll natively with the build command options.
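A minimal sketch of such a plugin, assuming it lives in _plugins/env_vars.rb and MY_VAR is just an illustrative variable name:
# _plugins/env_vars.rb
# Copy selected environment variables into site.config so they can be used
# in Liquid as {{ site.my_var }}, falling back to a default when unset.
Jekyll::Hooks.register :site, :after_init do |site|
  site.config['my_var'] = ENV['MY_VAR'] || 'default-value'
end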
It is now possible to use a bash environment variable (say $FOO) in Jekyll's _config.yml file with GitHub Actions:
# _config.yml
title: FOO
Create a bash script, say sample.sh, that replaces a given input string FOO in _config.yml with another string:
# .github/workflows/sample.sh
export FOO=XYZ
while IFS='' read -r a; do
  echo "${a//FOO/$FOO}"
done < _config.yml > _config.yml.t
mv _config.yml{.t,}
Create a workflow file, say github-pages.yml, and put the script step before "Build with Jekyll":
# Sample workflow for building and deploying a Jekyll site to GitHub Pages
name: Deploy Jekyll with GitHub Pages dependencies preinstalled
on:
  # Runs on pushes targeting the default branch
  push:
    branches:
      - 'master'
      - 'mybranch'
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
# Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages
permissions:
  contents: read
  pages: write
  id-token: write
# Allow one concurrent deployment
concurrency:
  group: "pages"
  cancel-in-progress: true
jobs:
  # Build job
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Setup Pages
        uses: actions/configure-pages@v2
      - name: Utilize FOO
        run: |
          bash .github/workflows/sample.sh
      - name: Build with Jekyll
        uses: actions/jekyll-build-pages@v1
        with:
          source: ./
          destination: ./_site
      - name: Upload artifact
        uses: actions/upload-pages-artifact@v1
  # Deployment job
  deploy:
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    runs-on: ubuntu-latest
    needs: build
    steps:
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v1
If your bash environment variables are declared like this
export ENV_ACCESS_TOKEN=xxxxx
export ENV_SPACE_ID=yyyyyy
You can get them like this in your _config.yml:
space: ENV_SPACE_ID # Required
access_token: ENV_ACCESS_TOKEN # Required
