My GitHub Action calls my build.rb script, which depends on the pandoc executable being available on the system (this is actually done by a dependency, so I can't change it).
I added pandoc to the GitHub Action as per the instructions at https://github.com/pandoc/pandoc-action-example, but my build.rb script still errors with sh: 1: pandoc: not found.
I think maybe I have to add the docker-compiled pandoc to the PATH so it can be used by my script, but I'm not sure how to do that.
This is the result from the uses: docker://pandoc/core:2.9 step:
Run docker://pandoc/core:2.9
/usr/bin/docker run --name pandoccore29_2dccd6 --label 372a9e --workdir /github/workspace --rm -e HOME -e GITHUB_JOB -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_REPOSITORY_OWNER -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RETENTION_DAYS -e GITHUB_RUN_ATTEMPT -e GITHUB_ACTOR -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_SERVER_URL -e GITHUB_API_URL -e GITHUB_GRAPHQL_URL -e GITHUB_REF_NAME -e GITHUB_REF_PROTECTED -e GITHUB_REF_TYPE -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e GITHUB_ACTION_REPOSITORY -e GITHUB_ACTION_REF -e GITHUB_PATH -e GITHUB_ENV -e GITHUB_STEP_SUMMARY -e RUNNER_OS -e RUNNER_ARCH -e RUNNER_NAME -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -e ACTIONS_RUNTIME_URL -e ACTIONS_RUNTIME_TOKEN -e ACTIONS_CACHE_URL -e GITHUB_ACTIONS=true -e CI=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/_temp/_runner_file_commands":"/github/file_commands" -v "/home/runner/work/maxpleaner.github.io/maxpleaner.github.io":"/github/workspace" pandoc/core:2.9
The GitHub Action workflow is shown below.
name: Ruby
on:
  push:
    branches: [ master ]
permissions:
  contents: read
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        ruby-version: ['3.0']
    steps:
      - uses: actions/checkout@v3
      - name: Setup Ruby
        uses: ruby/setup-ruby@v1
        with:
          ruby-version: ${{ matrix.ruby-version }}
          bundler-cache: true # runs 'bundle install' and caches installed gems automatically
      - uses: docker://pandoc/core:2.9
      - name: Build
        run: bundle exec ruby build.rb
      - name: Deploy
        uses: s0/git-publish-subdir-action@develop
        env:
          REPO: self
          BRANCH: gh-pages # The branch name where you want to push the assets
          FOLDER: dist # The directory where your assets are generated
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # GitHub will automatically add this - you don't need to bother getting a token
          MESSAGE: "Build: ({sha}) {msg}" # The commit message
Related
I have built a Golang project and I want to deploy the app with GitLab CI once it has been successfully tested, but the tests fail because they cannot connect to MySQL.
I want to use the Golang image and the MySQL image in one stage.
This is my current pipeline. In the test stage, the before_script fails with /bin/bash: line 130: mysql: command not found.
# To contribute improvements to CI/CD templates, please follow the Development guide at:
# https://docs.gitlab.com/ee/development/cicd/templates.html
# This specific template is located at:
# https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Go.gitlab-ci.yml
image: golang:latest
services:
  - mysql:latest
stages:
  - test
  - build
  - deploy
variables:
  MYSQL_DATABASE: "db"
  MYSQL_USER: "user"
  MYSQL_PASSWORD: "password"
  MYSQL_ROOT_PASSWORD: "password"
format:
  stage: test
  variables:
    # Configure mysql environment variables (https://hub.docker.com/_/mysql/)
    MYSQL_DATABASE: $MYSQL_DATABASE
    MYSQL_PASSWORD: $MYSQL_PASSWORD
    MYSQL_ROOT_PASSWORD: $MYSQL_ROOT_PASSWORD
  services:
    - mysql:latest
  before_script:
    - mysql --version
  script:
    - go fmt $(go list ./... | grep -v /vendor/)
    - go vet $(go list ./... | grep -v /vendor/)
    - go test -race $(go list ./... | grep -v /vendor/)
compile:
  stage: build
  script:
    - mkdir -p typing
    - go build -o typing ./...
  artifacts:
    paths:
      - typing
deploy:
  image: google/cloud-sdk:alpine
  stage: deploy
  allow_failure: true
  script:
    - "which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )"
    # Run ssh-agent (inside the build environment)
    - eval $(ssh-agent -s)
    # Add the SSH key stored in SSH_PRIVATE_KEY variable to the agent store
    - ssh-add <(echo "$SSH_PRIVATE_KEY" | base64 -d)
    - mkdir -p ~/.ssh
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
    - (echo "$SSH_PRIVATE_KEY" | base64 -d) > file
    - echo "$SSH_PUBLIC_KEY" > file.pub
    - chmod 600 file
    - echo $SERVICE_ACCOUNT > file.json
    - gcloud auth activate-service-account --key-file file.json
    - gcloud compute scp typing/* --project="project-id" --zone="zone" vm-name:/home/ubuntu
    - ssh -i file ubuntu@public-ip 'sudo ./kill.sh; sudo ./start.sh'
  artifacts:
    paths:
      - typing
How can I achieve that?
Thanks in advance.
In the test stage, the job is based on the golang image, which as a result does not come packaged with the MySQL client.
In order to reach the MySQL service you defined, you need to install the client.
If I am not mistaken, the Golang image is based on Debian, so something like this should work:
before_script:
  - apt-get update
  - apt-get install -y default-mysql-client
  - mysql --version
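Once the client is installed, the mysql:latest service declared under services: is reachable from the job using the service name mysql as the hostname, so a quick connectivity check could be appended to the same before_script (reusing the variables already defined in the pipeline):
  - mysql -h mysql -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" -e "SELECT 1" "$MYSQL_DATABASE"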
Below I have shown my .gitlab-ci.yml file:
variables:
  timezone: "Europe/Vienna"
stages:
  - style
  - build
style:
  stage: style
  script:
    - sudo docker run --rm -v $PWD:/code omercnet/pycodestyle
build:
  stage: build
  script:
    blla blla
    - sudo docker exec ${CI_PROJECT_PATH_SLUG} /bin/bash -c "./install.sh"
    blla blla
Next, it is my install.sh script:
#!/bin/bash
blla blla
echo -e "\nmx_download_url: https://dl.bintray.com/random" >> /etc/random-installer/setup.yml
sed -i "s+old text+new text+g" /etc/random-installer/setup.yml
blla blla
How can I use the value of the timezone variable from the YAML file as the new text in the bash script?
timezone is visible as an environment variable to the GitLab jobs; however, install.sh runs in a different context - it is executed by Docker. Docker itself has access to $timezone, but the containers it runs do not.
To solve your problem you can explicitly pass it:
- sudo docker exec -e TIMEZONE=$timezone ${CI_PROJECT_PATH_SLUG} /bin/bash -c "./install.sh"
This will set environment variable $TIMEZONE for install.sh inside the container.
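A minimal sketch of how install.sh could then consume the injected variable (the timezone: key in setup.yml is only an illustration, not taken from the original script):
#!/bin/bash
# $TIMEZONE was passed in via: docker exec -e TIMEZONE=$timezone ...
sed -i "s+^timezone:.*+timezone: ${TIMEZONE}+g" /etc/random-installer/setup.yml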
Inside .gitlab-ci.yml we define a variable (which is just the artifactId name for the project): ARTIFACT_ID: myMicroservice-1
This variable ARTIFACT_ID is sent to a general microservice which has all the scripts to publish/deploy Docker, etc.
How can I read this variable directly from the POM file?
pom:
<artifactId>myMicroservice-1</artifactId>
.gitlab-ci.yml:
variables:
  SKIP_UNIT_TESTS_FLAG: "true"
  ARTIFACT_ID: myMicroserverName
  IS_OSL: "true"
  KUBERNETES_NAMESPACE: test
Here is how we do it.
The value is extracted from pom.xml based on its XPath. We use the xmllint tool from libxml2-utils, but there are various other tools for that.
The value is then saved as an environment variable in a file, which is passed to further GitLab jobs as an artifact.
stages:
  - prepare
  - build
variables:
  VARIABLES_FILE: ./variables.txt # "." is required for sh based images
  POM_FILE: pom.xml
get-version:
  stage: prepare
  image: ubuntu
  script:
    - apt-get update
    - apt-get install -y libxml2-utils
    - APP_VERSION=`xmllint --xpath '/*[local-name()="project"]/*[local-name()="version"]/text()' $POM_FILE`
    - echo "export APP_VERSION=$APP_VERSION" > $VARIABLES_FILE
  artifacts:
    paths:
      - $VARIABLES_FILE
build:
  stage: build
  image: docker:latest
  script:
    - source $VARIABLES_FILE
    - echo "Here use $APP_VERSION as you like"
This is the script to grab information from the POM, in this case the artifactId:
- export myARTIFACT_ID=$(mvn exec:exec -q -Dexec.executable=echo -Dexec.args='${project.artifactId}')
- if [[ "$myARTIFACT_ID" == *finishWithWhatEverName ]]; then export myVariable="false"; else export myVariable="true"; fi
Then you can use myVariable for whatever you want.
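The same exec trick works for any other standard POM property; for example (these expressions are ordinary Maven project properties, shown only as an illustration):
- export myGROUP_ID=$(mvn exec:exec -q -Dexec.executable=echo -Dexec.args='${project.groupId}')
- export myVERSION=$(mvn exec:exec -q -Dexec.executable=echo -Dexec.args='${project.version}')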
I tried Angel's solution, but got an error:
/bin/bash: line 87: mvn: command not found
Finally I succeeded when I used the following to extract the tag value:
- export ARTIFACT_ID=$(cat pom.xml | grep "<artifactId>" | head -1 | cut -d">" -f2 | cut -d"<" -f1 | awk '{$1=$1;print}')
jobname:
  stage: stage
  before_script:
    - export "MAVEN_ID=$(mvn help:evaluate -Dexpression=project.id -q -DforceStdout)"
    - >
      IFS=: read -r MAVEN_GROUPID MAVEN_ARTIFACTID MAVEN_PACKAGING MAVEN_VERSION <<< ${MAVEN_ID}
  script:
    - >
      echo -e "groupId: ${MAVEN_GROUPID}\nartifactId: ${MAVEN_ARTIFACTID}\nversion: ${MAVEN_VERSION}\npackaging: ${MAVEN_PACKAGING}"
mvn help:evaluate -Dexpression=project.id -q -DforceStdout prints the artifact identification information in the format com.group.id:artifactid:packaging:version.
The MAVEN_ID variable is then split with IFS, using the colon (:) as the separator, to get the common Maven values artifactId, groupId, version and packaging.
Later these variables can be used in the code, for example for echoing values.
IFS is a bash feature, so the corresponding GitLab runner needs bash installed.
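As a follow-up, the parsed values can also be handed to later jobs rather than just echoed, for example through GitLab's dotenv report artifact (the job layout here is only a sketch):
  script:
    - echo "APP_NAME=${MAVEN_ARTIFACTID}" >> build.env
    - echo "APP_VERSION=${MAVEN_VERSION}" >> build.env
  artifacts:
    reports:
      dotenv: build.env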
I'm trying to add a condition to my tests that prints "showing dev branch" if my branch name is development, but I'm receiving this error:
if [ "${CIRCLE_BRANCH}" == "development"]; then echo "showing dev branch" fi
bash: -c: line 2: syntax error: unexpected end of file
if [ "${CIRCLE_BRANCH}" == "development"]; then echo "showing dev branch" fi returned exit code 1
See my circle.yml below:
general:
  artifacts:
    - "test_evidences"
  branches:
    only:
      - development
machine:
  node:
    version: 6.10.3
dependencies:
  pre:
    - curl -L -o google-chrome.deb https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
    - sudo dpkg -i google-chrome.deb
    - sudo sed -i 's|HERE/chrome\"|HERE/chrome\" --disable-setuid-sandbox|g' /opt/google/chrome/google-chrome
    - rm google-chrome.deb
    - npm install
    - npm install -g grunt grunt-cli
  override:
    - node_modules/.bin/webdriver-manager update
test:
  pre:
    - sleep 60
  override:
    - if [ "${CIRCLE_BRANCH}" == "development"]; then
        echo "showing dev branch"
      fi
    - grunt apiTests
    - node_modules/.bin/protractor conf.js
    - sed -i -- 's,//,/,g' test_evidences/htmlReport.html
Problem solved! The fix was adding the missing space before the closing ] and the missing ; after the echo, before fi.
My new circle.yml file is:
#!/usr/bin/env bash
general:
  artifacts:
    - "test_evidences"
  branches:
    only:
      - development
machine:
  node:
    version: 6.10.3
dependencies:
  pre:
    - curl -L -o google-chrome.deb https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
    - sudo dpkg -i google-chrome.deb
    - sudo sed -i 's|HERE/chrome\"|HERE/chrome\" --disable-setuid-sandbox|g' /opt/google/chrome/google-chrome
    - rm google-chrome.deb
    - npm install
    - npm install -g grunt grunt-cli
  override:
    - node_modules/.bin/webdriver-manager update
test:
  pre:
    - sleep 60
  override:
    - if [ "${CIRCLE_BRANCH}" == "development" ]; then
        echo "showing dev branch";
      fi
    - grunt apiTests
    - node_modules/.bin/protractor conf.js
    - sed -i -- 's,//,/,g' test_evidences/htmlReport.html
Is it possible to add a setting anywhere in the Travis configuration to cache my Docker image? Mine is a fairly big Docker image and it takes a while to download.
Any suggestions?
The simplest solution today (October 2019) is to add the following to .travis.yml:
cache:
  directories:
    - docker_images
before_install:
  - docker load -i docker_images/images.tar || true
before_cache:
  - docker save -o docker_images/images.tar $(docker images -a -q)
See Caching Docker Images on Build #5358
for the answer(s). For Docker 1.12, which is available on Travis now, it is recommended to cache the images manually. For Docker 1.13, you could use its --cache-from once it is available on Travis.
Save:
before_cache:
  # Save tagged docker images
  - >
    mkdir -p $HOME/docker && docker images -a --filter='dangling=false' --format '{{.Repository}}:{{.Tag}} {{.ID}}'
    | xargs -n 2 -t sh -c 'test -e $HOME/docker/$1.tar.gz || docker save $0 | gzip -2 > $HOME/docker/$1.tar.gz'
Load:
before_install:
  # Load cached docker images
  - if [[ -d $HOME/docker ]]; then ls $HOME/docker/*.tar.gz | xargs -I {file} sh -c "zcat {file} | docker load"; fi
You also need to declare a cache folder:
cache:
  bundler: true
  directories:
    - $HOME/docker
Caching Docker images is not recommended, according to the Travis documentation:
https://docs.travis-ci.com/user/caching/#things-not-to-cache
I just found the following approach as discussed in this article.
services:
  - docker
before_script:
  - docker pull myorg/myimage || true
script:
  - docker build --pull --cache-from myorg/myimage --tag myorg/myimage .
  - docker run myorg/myimage
after_script:
  - docker images
before_deploy:
  - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
deploy:
  provider: script
  script: docker push myorg/myimage
  on:
    branch: master
This works for me:
Update the desired Docker image name instead of <IMAGE_NAME_HERE> (3 places).
You can also use the same configuration for multiple images: docker save can handle multiple images, just make sure to pull each one before trying to save them (a multi-image sketch follows the example below).
services:
  - docker
cache:
  directories:
    - docker-cache
before_script:
  - |
    filename=docker-cache/saved_images.tar
    if [[ -f "$filename" ]]; then docker load < "$filename"; fi
    mkdir -p docker-cache
    docker pull <IMAGE_NAME_HERE>
    docker save -o "$filename" <IMAGE_NAME_HERE>
script:
  - docker run <IMAGE_NAME_HERE>...
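A minimal sketch of the multi-image variant mentioned above (the image names are placeholders):
before_script:
  - |
    filename=docker-cache/saved_images.tar
    if [[ -f "$filename" ]]; then docker load < "$filename"; fi
    mkdir -p docker-cache
    docker pull alpine:3.18
    docker pull postgres:15
    docker save -o "$filename" alpine:3.18 postgres:15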