Is there a way I can use one of my bash environment variables (say $FOO) in my Jekyll _config.yml file?
I've tried using:
foo = <%= ENV['FOO'] %>
But it didn't work as the Ruby code wasn't interpreted.
Versions used:
Ruby: 2.1.2
Jekyll: 2.5.3
If your goal is to use environment variables as Liquid items like {{ site.something }}, you might give this line in your Gemfile a go:
gem 'jekyll-environment-variables', group: :jekyll_plugins
And then you'll be able to use {{ site.env.HOME }} and expect it to be converted to something like /home/ubuntu in the output HTML.
Disclosure: I am the owner of the gem and I've been using it personally since long ago.
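Under the hood, a plugin like this only has to copy the process environment into the site configuration before rendering. A minimal sketch of the idea (not the gem's actual source):

# _plugins/env_to_config.rb - minimal sketch, not the gem's actual source
Jekyll::Hooks.register :site, :post_read do |site|
  # Expose every environment variable to Liquid as {{ site.env.NAME }}
  site.config['env'] = ENV.to_h
end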
The answer by @elryco is close but not quite right, at least for my setup. It took some trial and error, but this finally worked. Note this only works for certain env vars supported by the contentful plugin.
Note that you need the gem jekyll-contentful-data-import (v1.7.0 or up) for this solution to actually work.
Bash environment (e.g., ~/.bash_profile):
export CONTENTFUL_ACCESS_TOKEN=foo
export CONTENTFUL_SPACE_ID=bar
In _config.yml, reference them as:
contentful:
  spaces:
    - example:
        space: ENV_CONTENTFUL_SPACE_ID
        access_token: ENV_CONTENTFUL_ACCESS_TOKEN
This is the same as what's written in the GitHub documentation.
I recently had to try and do this myself. It turns out you can't put environment variables directly into a Jekyll config file, but you can write a rake task that will take environment variables and apply them to your config.
Here's an example:
# Rakefile
require 'jekyll'

task default: %w[build]

desc "Build the site"
task :build do
  config = Jekyll.configuration({
    url: ENV["SITE_URL"],
  })
  site = Jekyll::Site.new(config)
  Jekyll::Commands::Build.build(site, config)
end
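With this Rakefile at the project root, running something like SITE_URL=https://example.com bundle exec rake should build the site with the url picked up from the environment (SITE_URL here is just an example variable name).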
Unfortunately, there is no direct way of accessing environment variables in Liquid tags, at least not officially.
But I wrote a wrapper script that reads the environment variable before Jekyll starts, appends it to the _config.yml file, and deletes it again after the build:
echo "secret-variable: $PASSWORD" >> _config.yml
bundle exec jekyll build -d target
sed '$d' _config.yml //this is to delete the last line
Now I'm free to use site.secret-variable anywhere in the liquid tags.
I know this isn't the cleanest way of doing it, but then neither is writing a custom Ruby script.
I personally find the use of a Ruby Jekyll plugin more appropriate and portable. There's a very simple yet effective solution available here.
The main idea is that Ruby has access to the environment, so a small Ruby plugin can load whatever you want from ENV into the site.config Liquid array, and you can define default values as well.
Please note that the example given in the link isn't the most relevant, since prod/staging environments are already offered natively by Jekyll through the build command options.
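For illustration, a plugin along these lines is enough (a sketch with hypothetical variable names and defaults):

# _plugins/environment_variables.rb - sketch with hypothetical names and defaults
module Jekyll
  class EnvironmentVariablesGenerator < Generator
    def generate(site)
      # Pull selected variables from ENV into site.config, with fallbacks,
      # so templates can use e.g. {{ site.url }}.
      site.config['url'] = ENV['SITE_URL'] || 'http://localhost:4000'
    end
  end
end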
It is now possible to use a bash environment variable (say $FOO) in Jekyll's _config.yml file with GitHub Actions:
# _config.yml
title: FOO
Create a bash script, say sample.sh, that replaces a given placeholder string FOO in _config.yml with the value of $FOO:
# .github/workflows/sample.sh
export FOO=XYZ
while IFS='' read -r a; do
  echo "${a//FOO/$FOO}"
done < _config.yml > _config.yml.t
mv _config.yml{.t,}
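Note that the substitution is purely textual: every occurrence of the literal string FOO in _config.yml is replaced, so choose a placeholder that can't collide with real content.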
Create a workflow file, say github-pages.yml, and run the script before the "Build with Jekyll" step:
# Sample workflow for building and deploying a Jekyll site to GitHub Pages
name: Deploy Jekyll with GitHub Pages dependencies preinstalled

on:
  # Runs on pushes targeting the default branch
  push:
    branches:
      - 'master'
      - 'mybranch'
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages
permissions:
  contents: read
  pages: write
  id-token: write

# Allow one concurrent deployment
concurrency:
  group: "pages"
  cancel-in-progress: true

jobs:
  # Build job
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Setup Pages
        uses: actions/configure-pages@v2
      - name: Utilize FOO
        run: |
          bash .github/workflows/sample.sh
      - name: Build with Jekyll
        uses: actions/jekyll-build-pages@v1
        with:
          source: ./
          destination: ./_site
      - name: Upload artifact
        uses: actions/upload-pages-artifact@v1
  # Deployment job
  deploy:
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    runs-on: ubuntu-latest
    needs: build
    steps:
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v1
If your bash environment variables are declared like this:
export ENV_ACCESS_TOKEN=xxxxx
export ENV_SPACE_ID=yyyyyy
you can reference them like this in your _config.yml:
space: ENV_SPACE_ID # Required
access_token: ENV_ACCESS_TOKEN # Required
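(As noted in an answer above, the ENV_ prefix here is resolved by the jekyll-contentful-data-import gem, not by Jekyll itself, so this only applies to the contentful section of the config.)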
Related
I have the following GitHub Actions workflow that is intended to read our lines-of-code coverage from a coverage.txt file and post the coverage as a comment on the pull request.
name: Report Code Coverage

on:
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js 12.x
        uses: actions/setup-node@v1
        with:
          node-version: 12.x
      - name: Install dependencies
        run: npm ci
      - name: Run tests
        run: npm run test:coverage
      ## I need help around here, as test:coverage generates a file and I need to get a value from a file
      - uses: mshick/add-pr-comment@v1
        with:
          message: |
            **{{Where I want the code coverage value to go}}**
            🌏
            !
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          repo-token-user-login: 'github-actions[bot]' # The user.login for temporary GitHub tokens
          allow-repeats: false # This is the default
Where I am struggling is taking the value from the file, which I can obtain with bash using awk '/^Lines/ { print $2 }' < coverage.txt (not sure if there is a better alternative), and inserting it into the message above.
I found this article on YAML variables, but when I added some of that code to my own file, it was simply no longer recognized by GitHub Actions (which is the only way I know to test YAML). Normally it would fail somewhere and give me an error, but after multiple attempts, the only thing that worked was to remove it.
It is quite possible that I am missing something obvious, as I do not know YAML very well, nor even what certain keywords might be.
Alternatively, is it easier to just dump the contents of the file into the message? That could be acceptable as well.
You can create a step that saves the coverage to an output variable, which can then be accessed by the next step.
Notice that for this method you need to give the step an id and the output variable a name, so that you can access it in follow-up steps within the same job.
A sample with your workflow is below, but if you want to see a running demo, here is the repo: https://github.com/meroware/demo-coverage-set-output
- name: Run tests
  run: npm run test:coverage
- name: Get coverage output
  id: get_coverage
  run: echo "::set-output name=coverage::$(awk '/^Lines/ { print $2 }' < test.txt)"
- uses: mshick/add-pr-comment@v1
  with:
    message: |
      Coverage found ${{steps.get_coverage.outputs.coverage}}
      🌏
      !
    repo-token: ${{ secrets.GITHUB_TOKEN }}
    repo-token-user-login: 'github-actions[bot]' # The user.login for temporary GitHub tokens
    allow-repeats: false # This is the default
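Note that GitHub has since deprecated the ::set-output workflow command; on current runners the equivalent is appending to the $GITHUB_OUTPUT file, e.g. echo "coverage=$(awk '/^Lines/ { print $2 }' < test.txt)" >> "$GITHUB_OUTPUT", which is read back the same way via steps.get_coverage.outputs.coverage.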
I make a lot of GitHub Actions tutorials here if you're looking to grasp more of this topic. Cheers!
I have many GitLab projects that follow the same CI template. Whenever there is a small change in the CI script, I have to modify it manually in each project. Is there a way to store the CI script in a central location and have each project call that script, with some environment-variable substitution? For instance,
.gitlab-ci.yml in each project
/bin/bash -c "$(curl -fsSL <link_to_the_central_location>.sh)"
gitlab-ci.yml in the central location
stages:
  - build
  - test

build-code-job:
  stage: build
  script:
    - echo "Check the ruby version, then build some Ruby project files:"
    - ruby -v
    - rake

test-code-job1:
  stage: test
  script:
    - echo "If the files are built successfully, test some files with one command:"
    - rake test1

test-code-job2:
  stage: test
  script:
    - echo "If the files are built successfully, test other files with a different command:"
    - rake test2
You do not need curl; GitLab actually supports this via the include directive.
You need a repository where you store your general YAML files (you can choose whether it is a whole CI file or just parts). For this example, let's call this repository ci and assume your GitLab runs at example.com, so the project URL would be example.com/ci. We create two files in there, just to show the possibilities.
The first is a whole CI definition, ready to use - let's call the file ci.yml. This approach is not really flexible:
stages:
  - build
  - test

build-code-job:
  stage: build
  script:
    - echo "Check the ruby version, then build some Ruby project files:"
    - ruby -v
    - rake

test-code-job1:
  stage: test
  script:
    - echo "If the files are built successfully, test some files with one command:"
    - rake test1

test-code-job2:
  stage: test
  script:
    - echo "If the files are built successfully, test other files with a different command:"
    - rake test2
The second is a partial CI definition, which is more extendable - let's call this file includes.yml:
.build:
  stage: build
  script:
    - echo "Check the ruby version, then build some Ruby project files:"
    - ruby -v
    - rake

.test:
  stage: test
  script:
    - echo "this script tag will be overwritten"
There is even the option to use template strings from YAML; please refer to the GitLab documentation, but it is similar to the second option.
Then we have our project that wants to use such definitions. So either, for the whole CI file:
include:
  - project: 'ci'
    ref: master # think about tagging if you need it
    file: 'ci.yml'
As you can see, we are now referencing one YAML file with all the changes.
Or, with partial extends:
include:
  - project: 'ci'
    ref: master # think about tagging if you need it
    file: 'includes.yml'

stages:
  - build
  - test

build-code-job:
  extends: .build

job1:
  extends: .test
  script:
    - rake test1

job2:
  extends: .test
  script:
    - rake test2
As you can see, you can easily use includes to get a much more granular setup. Additionally, you could define variables at job1 and job2 (e.g. for the test target) and move the script block into includes.yml.
Furthermore, you can also use anchors for the script parts, which looks like this:
includes.yml
.build-script: &build
  - echo "Check the ruby version, then build some Ruby project files:"
  - ruby -v
  - rake

.build:
  stage: build
  script:
    - *build
And you can use the script anchor elsewhere within your configuration as well.
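Keep in mind that YAML anchors, unlike extends, generally do not work across files pulled in via include, so the anchor and the jobs using it have to live in the same file.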
For a deeper explanation you can also take a look at https://docs.gitlab.com/ee/ci/yaml/includes.html
I have a Ruby script that I want to run when a pull request is created. The script validates a series of conditions to be sure the pull request can be merged. It's a very simple script with no external gems, just standard Ruby.
I'm trying to run this script in a job's run step. The problem is, I'm not sure of the path where the file should be saved.
The script is called validator.rb. From my local computer I can run it using:
ruby -r ./validator.rb -e "Validator.new.validate_something 'One parameter'"
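For reference, a minimal sketch of what such a script could look like (hypothetical - the real script validates a series of conditions):

# validator.rb - hypothetical sketch; the real script checks more conditions
class Validator
  def validate_something(title)
    # Abort with a non-zero exit status so a CI job running this would fail.
    abort "Pull request title must not be empty" if title.to_s.strip.empty?
    puts "Validated: #{title}"
  end
end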
This works fine locally, but when I push it to GitHub it fails. I saved my script under .github/workflows/ruby-scripts, so my job looks like this:
jobs:
  title:
    name: "Title"
    runs-on: ubuntu-latest
    steps:
      - run: ruby -r ./ruby-scripts/validator.rb -e "Validator.new.validate_something '${{ github.event.pull_request.title }}'"
And I get:
Run ruby -r ./ruby-scripts/validator.rb -e "Validator.new.validate 'Create README.md'"
/usr/lib/ruby/vendor_ruby/rubygems/defaults/operating_system.rb:10: warning: constant Gem::ConfigMap is deprecated
/usr/lib/ruby/vendor_ruby/rubygems/defaults/operating_system.rb:10: warning: constant Gem::ConfigMap is deprecated
/usr/lib/ruby/vendor_ruby/rubygems/defaults/operating_system.rb:29: warning: constant Gem::ConfigMap is deprecated
/usr/lib/ruby/vendor_ruby/rubygems/defaults/operating_system.rb:30: warning: constant Gem::ConfigMap is deprecated
/usr/lib/ruby/vendor_ruby/rubygems/defaults/operating_system.rb:10: warning: constant Gem::ConfigMap is deprecated
/usr/local/lib/site_ruby/2.5.0/rubygems/core_ext/kernel_require.rb:92:in `require': cannot load such file -- ./ruby-scripts/validator.rb (LoadError)
from /usr/local/lib/site_ruby/2.5.0/rubygems/core_ext/kernel_require.rb:92:in `require'
##[error]Process completed with exit code 1.
I tried all the possible combinations of paths, and it fails each time.
Running pwd and ls returned:
- run: pwd => /home/runner/work/repo-name/repo-name
- run: ls => shell: /bin/bash -e {0}
What is the right way to do this?
As I've mentioned in the comments, the reason your workflow isn't working is that you're missing the crucial step that checks out your repository. By default, the workspace is empty unless you check out the repository with the Checkout GitHub Action.
From the GitHub Action's README:
This action checks-out your repository under $GITHUB_WORKSPACE, so your workflow can access it.
Here's an example:
- name: Checkout Repo
  uses: actions/checkout@v2
(That's it, really)
(Note: You don't have to specify a name for the step.)
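Once the checkout step runs before your run step, the repository contents live under $GITHUB_WORKSPACE (the default working directory), so the script can be required via its in-repo path, e.g. ruby -r ./.github/workflows/ruby-scripts/validator.rb ... (assuming that is where it is stored).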
For me, the problem was also the location of the file. The Ruby script runs from the root of the repository, not from the path of the workflow YAML (as I was expecting).
I was running run: ruby -r ./my-file.rb, where my-file.rb was next to the workflow YAML.
I realized this by adding this step:
- run: ruby -e 'p `ls`.split("\n")'
This printed the output of ls as a Ruby array.
I fixed using:
run: ruby ./.github/workflows/my_file.rb
What I considered:
GitHub offers GitHub Pages to host documentation, from either a folder on my master branch or a dedicated gh-pages branch, but that would mean committing build artifacts.
I can also let readthedocs build and host docs for me through webhooks, but that means learning how to configure Yet Another Tool, at a point in time where I'm trying to consolidate everything related to my project in github-actions.
I already have a docs-building process that works for me (using sphinx as the builder) and that I can also test locally, so I'd rather just leverage that. It has all the rules set up and drops some static html into an artifact - it just doesn't get served anywhere. Handling it in the workflow, where all the other deployment configuration of my project lives, feels better than scattering it over different tools or GitHub-specific options.
Is there already an action in the marketplace that allows me to do something like this?
name: CI

on: [push]

jobs:
  ... # do stuff like building my-project-v1.2.3.whl, testing, etc.

  release_docs:
    steps:
      - uses: actions/sphinx-to-pages@v1 # I wish this existed
        with:
          dependencies:
            - some-sphinx-extension
            - dist/my-project*.whl
          apidoc_args:
            - "--no-toc"
            - "--module-first"
            - "-o docs/autodoc"
            - "src/my-project"
          build-args:
            - "docs"
            - "public" # the content of this folder will then be served at
                       # https://my_gh_name.github.io/my_project/
In other words, I'd still like to have control over how the build happens and where artifacts are dropped, but don't want to handle the interaction with readthedocs or github-pages myself.
Actions that I tried:
❌ deploy-to-github-pages: runs the docs build in an npm container - it would be inconvenient to make that work with python and sphinx
❌ gh-pages-for-github-action: no documentation
❌ gh-pages-deploy: seems to target host envs like jekyll rather than static content, and correct usage with yml syntax is not yet documented - I tried a little and couldn't get it to work
❌ github-pages-deploy: looks good, but correct usage with yml syntax is not yet documented
✅ github-pages: needs a custom PAT in order to trigger rebuilds (which is inconvenient) and uploads broken html (which is bad, but might be my fault)
✅ deploy-action-for-github-pages: also works, and looks a little cleaner in the logs. Same limitations as the solution above though: it needs a PAT and the served html is still broken.
The eleven other results when searching for github+pages on the action marketplace all look like they want to use their own builder, which sadly never happens to be sphinx.
If you manage sphinx with pip (requirements.txt), pipenv, or poetry, you can deploy your documentation to GitHub Pages as follows. The workflow works the same for other Python-based static site generators like Pelican and MkDocs. Here is a simple example for MkDocs; we just add the workflow as .github/workflows/gh-pages.yml
For more options, see the latest README: peaceiris/actions-gh-pages: GitHub Actions for GitHub Pages 🚀 Deploy static files and publish your site easily. Static-Site-Generators-friendly.
name: github pages

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@v2

      - name: Setup Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.8'

      - name: Upgrade pip
        run: |
          # install pip=>20.1 to use "pip cache dir"
          python3 -m pip install --upgrade pip

      - name: Get pip cache dir
        id: pip-cache
        run: echo "::set-output name=dir::$(pip cache dir)"

      - name: Cache dependencies
        uses: actions/cache@v2
        with:
          path: ${{ steps.pip-cache.outputs.dir }}
          key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }}
          restore-keys: |
            ${{ runner.os }}-pip-

      - name: Install dependencies
        run: python3 -m pip install -r ./requirements.txt

      - run: mkdocs build

      - name: Deploy
        uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./site
I got it to work, but there is no dedicated action to build and host sphinx docs on either GitHub Pages or readthedocs as of yet, so as far as I'm concerned there is quite a bit left to be desired here.
This is my current release_sphinx job that uses the deploy-action-for-github-pages action and uploads to github-pages:
release_sphinx:
  needs: [build]
  runs-on: ubuntu-latest
  container:
    image: python:3.6
    volumes:
      - dist:dist
      - public:public
  steps:
    # check out sources that will be used for autodocs, plus readme
    - uses: actions/checkout@v1

    # download wheel that was built and uploaded in the build step
    - uses: actions/download-artifact@v1
      with:
        name: distributions
        path: dist

    # didn't need to change anything here, but had to add sphinx.ext.githubpages
    # to my conf.py extensions list. that fixes the broken uploads
    - name: Building documentation
      run: |
        pip install dist/*.whl
        pip install sphinx Pallets-Sphinx-Themes
        sphinx-apidoc --no-toc --module-first -o docs/autodoc src/stenotype
        sphinx-build docs public -b dirhtml

    # still need to build and set the PAT to get a rebuild on the pages job,
    # apart from that quite clean and nice
    - name: github pages deploy
      uses: peaceiris/actions-gh-pages@v2.3.1
      env:
        PERSONAL_TOKEN: ${{ secrets.PAT }}
        PUBLISH_BRANCH: gh-pages
        PUBLISH_DIR: public

    # since gh-pages has a history, this step might no longer be necessary.
    - uses: actions/upload-artifact@v1
      with:
        name: documentation
        path: public
Shoutout to the deploy action's maintainer, who resolved the upload problem within 8 minutes of me posting it as an issue.
I recently started using Ansible and have run into a problem. In one of my YAML structures I have defined something like this:
---
# file: main.yml
#
# Jenkins variables for installation and configuration
jenkins:
  debian:
    # Debian repository containing Jenkins and the associated key for accessing the repository
    repo: 'deb http://pkg.jenkins-ci.org/debian binary/'
    repo_key: 'http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key'
  # Dependencies needed for Jenkins to run properly
  dependencies:
    - 'openjdk-7-jdk'
    - 'git-core'
    - 'curl'
  port: 8080
  # Installation directory
  home: '/opt/jenkins'
  # Path to the location of the main Jenkins-cli binary
  bin_path: '{{jenkins.home}}/jenkins-cli.jar'
  # Path to the location of the Jenkins updates file
  updates_path: '{{jenkins.home}}/updates_jenkins.json'
Ansible gives me an error like this from a specific task:
"recursive loop detected in template string:
{{jenkins.home}}/updates_jenkins.json"
I've narrowed the problem down to the fact that bin_path and updates_path both refer to home. Is there a way around this? It kind of sucks that I would need to define '/opt/jenkins' multiple times.
As far as I know, this is a limitation of Jinja2 variables: resolving {{jenkins.home}} requires templating the whole jenkins dictionary, which in turn contains bin_path and updates_path, so the lookup recurses. It was working with the old ${} syntax and broke as of 1.4; I am not sure if they fixed it in 1.5.
My temporary solution would be to move home out of the jenkins dictionary:
---
# file: main.yml
#
# Jenkins variables for installation and configuration

# Installation directory
jenkins_home: '/opt/jenkins'

jenkins:
  debian:
    # Debian repository containing Jenkins and the associated key for accessing the repository
    repo: 'deb http://pkg.jenkins-ci.org/debian binary/'
    repo_key: 'http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key'
  # Dependencies needed for Jenkins to run properly
  dependencies:
    - 'openjdk-7-jdk'
    - 'git-core'
    - 'curl'
  port: 8080
  # Path to the location of the main Jenkins-cli binary
  bin_path: '{{jenkins_home}}/jenkins-cli.jar'
  # Path to the location of the Jenkins updates file
  updates_path: '{{jenkins_home}}/updates_jenkins.json'
This should do it:
bin_path: "{{ jenkins['home'] }}/jenkins-cli.jar"