Nested stages in CI jobs - continuous-integration

In a GitLab CI job description it is possible to specify stages: jobs are grouped by stage, and jobs within the same stage run in parallel. Imagine that I'd like to do the following:
Build a release binary.
Build a release Docker image for release binary.
Build a debug binary.
Build a debug Docker image for debug binary.
Without nested stages, I can build the release and debug binaries at the same time and build both images afterwards. But this is terribly inefficient, because one of the builds takes a lot longer than the other, yet I cannot start creating an image for the build that finished first.
It would be perfect if each Docker image building job could start as soon as its corresponding build finished. One way this might have been possible is if I could specify nested stages, where, say, a stage build-all had two nested stages, build-release and build-debug, each composed of two jobs: build-release-binary and build-release-image, and, similarly, build-debug-binary and build-debug-image.
Since I'm new to GitLab, I would also appreciate a negative answer, i.e. knowing that it is not possible is also useful.

Problem
To first confirm your problem, I imagine you have a setup like this:
.gitlab-ci.yml:
stages:
  - build-binaries
  - build-images
# Binaries
build-release-binary:
  stage: build-binaries
  script:
    - make release
build-debug-binary:
  stage: build-binaries
  script:
    - make debug
# Docker Images
build-release-image:
  stage: build-images
  dependencies:
    - build-release-binary
  script:
    - docker build -t wvxvw:release .
build-debug-image:
  stage: build-images
  dependencies:
    - build-debug-binary
  script:
    - docker build -t wvxvw:debug .
And that should produce a pipeline like this:
      build-binaries                     build-images
 ______________________                _____________________
|                      |              |                     |
| build-release-binary |----+----+--->| build-release-image |
|______________________|    |    |    |_____________________|
                            |    |
 ______________________     |    |     _____________________
|                      |    |    |    |                     |
|  build-debug-binary  |----+    +--->|  build-debug-image  |
|______________________|              |_____________________|
Assessment
You are correct that no jobs from the build-images stage will begin until all jobs from the build-binaries stage complete (even when a job's dependencies are already met).
There is a GitLab issue open that discusses this:
gitlab-org/gitlab-ce#49964: Allow running a CI job if its dependencies succeeded
I've added a comment pointing out the improvements that could be made in this case. In the future, the pipeline might then look like this (note the separate connecting lines):
      build-binaries                  build-images
 ______________________              _____________________
|                      |            |                     |
| build-release-binary |----------->| build-release-image |
|______________________|            |_____________________|
 ______________________              _____________________
|                      |            |                     |
|  build-debug-binary  |----------->|  build-debug-image  |
|______________________|            |_____________________|
Workaround
Sometimes if you have sequential tasks, it's easier to simply run them in a single job. This avoids the overhead of firing up another job, when you already have everything ready to go in the first job.
As a work-around, you could simply flatten your pipeline into a single stage which would build both the binary and the Docker image:
.gitlab-ci.yml:
stages:
  - build
build-release:
  stage: build
  script:
    - make release
    - docker build -t wvxvw:release .
build-debug:
  stage: build
  script:
    - make debug
    - docker build -t wvxvw:debug .
Your pipeline would then of course look like this:
      build
 _______________
|               |
| build-release |
|_______________|
 _______________
|               |
|  build-debug  |
|_______________|
I've worked with a team to simplify their pipeline in a similar manner, and we were pleased with the results.

As of GitLab 12.2, this is solved by the needs keyword, which allows arbitrary DAGs of jobs. As of GitLab 13.1 (Beta) you can also visualize the resulting graph.
For example, imagine you want to run pylint and unit tests in parallel, and then check the coverage of your unit tests, but without waiting for pylint to finish.
stages:
  - Checks
  - SecondaryChecks
pylint:
  stage: Checks
  script: pylint
unittests:
  stage: Checks
  script: coverage run -m pytest -rs --verbose
testcoverage:
  stage: SecondaryChecks
  needs: ["unittests"]
  script: coverage report -m | grep -q "TOTAL.*100%"
Note that 'needs' only works for targets defined in previous stages. Hence the need for two stages here.
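Applied back to the pipeline from the original question, a minimal sketch using needs might look like the following (job names and image tags are reused from the configuration above), so that each image job starts as soon as its own binary job has finished:
stages:
  - build-binaries
  - build-images
# Binaries
build-release-binary:
  stage: build-binaries
  script:
    - make release
build-debug-binary:
  stage: build-binaries
  script:
    - make debug
# Docker Images: 'needs' lets each image job start as soon as the named binary job succeeds
build-release-image:
  stage: build-images
  needs: ["build-release-binary"]
  script:
    - docker build -t wvxvw:release .
build-debug-image:
  stage: build-images
  needs: ["build-debug-binary"]
  script:
    - docker build -t wvxvw:debug .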

Related

GitHub Actions on Windows host (PowerShell?): exit code of previous lines being ignored

I had this step in a macOS lane:
jobs:
  macOS_build:
    runs-on: macOS-latest
    steps:
      - uses: actions/checkout@v1
      - name: Build in DEBUG and RELEASE mode
        run: ./configure.sh && make DEBUG && make RELEASE
Then I successfully split it up this way:
jobs:
  macOS_build:
    runs-on: macOS-latest
    steps:
      - name: Build in DEBUG and RELEASE mode
        run: |
          ./configure.sh
          make DEBUG
          make RELEASE
This conversion works because if make DEBUG fails, make RELEASE won't be executed and the whole step is marked as FAILED by GitHub Actions.
However, trying to convert this from the Windows lane:
jobs:
  windows_build:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v1
      - name: Build in DEBUG and RELEASE mode
        shell: cmd
        run: configure.bat && make.bat DEBUG && make.bat RELEASE
To this:
jobs:
  windows_build:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v1
      - name: Build in DEBUG and RELEASE mode
        shell: cmd
        run: |
          configure.bat
          make.bat DEBUG
          make.bat RELEASE
This doesn't work because, strangely enough, only the first line is executed. So I tried changing the shell attribute to powershell:
jobs:
  windows_build:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v1
      - name: Build in DEBUG and RELEASE mode
        shell: powershell
        run: |
          configure.bat
          make.bat DEBUG
          make.bat RELEASE
However this fails with:
configure.bat : The term 'configure.bat' is not recognized as the name
of a cmdlet, function, script file, or operable program. Check the
spelling of the name, or if a path was included, verify that the path
is correct and try again.
Then I saw this other SO answer, so I converted it to:
jobs:
  windows_build:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v1
      - name: Build in DEBUG and RELEASE mode
        shell: powershell
        run: |
          & .\configure.bat
          & .\make.bat DEBUG
          & .\make.bat RELEASE
This finally launches all batch files independently, however it seems to ignore the exit code (so if configure.bat fails, it still runs the next lines).
Any idea how to separate lines in a GitHub Actions workflow properly?
In PowerShell, you'll have to check the automatic $LASTEXITCODE variable after each call if you want to take action on the (nonzero) exit code of the most recently executed external program or script:
if ($LASTEXITCODE) { exit $LASTEXITCODE }
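For illustration, here is a sketch of how that check could be wired into the Windows workflow step from the question (job, step, and file names are reused from the question):
jobs:
  windows_build:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v1
      - name: Build in DEBUG and RELEASE mode
        shell: powershell
        run: |
          # Check the exit code after each batch file and bail out early on failure.
          .\configure.bat
          if ($LASTEXITCODE) { exit $LASTEXITCODE }
          .\make.bat DEBUG
          if ($LASTEXITCODE) { exit $LASTEXITCODE }
          .\make.bat RELEASE
          exit $LASTEXITCODE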
If you want to keep the code small, you could check for intermediate success vs. failure via the automatic $? variable, a Boolean that contains $true if the most recent command or expression succeeded; for external programs, success is inferred from an exit code of 0:
.\configure.bat
if ($?) { .\make.bat DEBUG }
if ($?) { .\make.bat RELEASE }
exit $LASTEXITCODE
Note that if you were to use PowerShell (Core) 7+, you could use the bash-like approach, since && and ||, the pipeline-chain operators, are now supported - as long as you end each statement-internal line with &&, you can place each call on its own line:
# PSv7+
.\configure.bat &&
.\make.bat DEBUG &&
.\make.bat RELEASE
However, note that any nonzero exit code is mapped onto 1 when the PowerShell CLI is called via -Command, which is what I presume happens behind the scenes, and assuming that an external program is called last. That is, the specific nonzero exit code is lost. If it is of interest, append an exit $LASTEXITCODE line to the above.
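Assuming PowerShell 7 is available on the runner as the pwsh shell (it is on GitHub-hosted Windows runners), a sketch of that variant for the same step might be:
- name: Build in DEBUG and RELEASE mode
  shell: pwsh
  run: |
    # PowerShell 7+: && only runs the next call if the previous one succeeded.
    .\configure.bat &&
    .\make.bat DEBUG &&
    .\make.bat RELEASE
    # Propagate the last exit code, in case the specific value is of interest.
    exit $LASTEXITCODE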

Stop a GitHub Actions matrix case

I want to use a GitHub Actions matrix for different build types, but there's one case of the matrix that I'm not interested in supporting. How do I stop this case from running while still getting the build marked as successful?
In this particular case I want to build Windows and Ubuntu, 32-bit and 64-bit, but I'm not interested in supporting 32-bit on Ubuntu. So my matrix would be:
strategy:
  fail-fast: false
  matrix:
    os: [windows-latest, ubuntu-latest]
    platform: ['x64', 'x86']
My current solution is to stop each action running by adding an if expression:
- name: Build Native
  if: ${{ ! (matrix.os == 'ubuntu-latest' && matrix.platform == 'x86') }}
While this works okay, I feel there ought to be a more elegant way of solving this. Can anyone help make my yaml script more beautiful?
Perhaps the strategy.matrix.exclude directive is suitable?
From the documentation:
You can remove specific configurations defined in the build matrix
using the exclude option. Using exclude removes a job defined by the
build matrix.
So in your case, probably something like this:
strategy:
  matrix:
    os: [windows-latest, ubuntu-latest]
    platform: ['x64', 'x86']
    exclude:
      - os: ubuntu-latest
        platform: x86
There are situations where one wants to include or exclude specific matrix coordinates so as not to run some of them, yet (stretching the question a bit) still wants the job to run for a couple of those coordinates, so as to track their evolution across commits without blocking the whole process.
In that situation, continue-on-error at the job level, combined with matrix include and exclude, is very useful:
Prevents a workflow run from failing when a job fails. Set to true to allow a workflow run to pass when this job fails.
This is similar to GitLab CI's allow_failure, although at the time of writing the GitHub Actions UI only has two states (red failed and green passed), whereas GitLab introduces a third one (orange warning) in that case.
Here is a real-life workflow example:
jobs:
  linux:
    continue-on-error: ${{ matrix.experimental }}
    strategy:
      fail-fast: false
      matrix:
        os:
          - ubuntu-20.04
        container:
          - 'ruby:2.0'
          - 'ruby:2.1'
          - 'ruby:2.2'
          - 'ruby:2.3'
          - 'ruby:2.4'
          - 'ruby:2.5'
          - 'ruby:2.6'
          - 'ruby:2.7'
          - 'ruby:3.0'
          - 'ruby:2.1-alpine'
          - 'ruby:2.2-alpine'
          - 'ruby:2.3-alpine'
          - 'ruby:2.4-alpine'
          - 'ruby:2.5-alpine'
          - 'ruby:2.6-alpine'
          - 'ruby:2.7-alpine'
          - 'jruby:9.2-jdk'
        experimental:
          - false
        include:
          - os: ubuntu-20.04
            container: 'ruby:3.0.0-preview2'
            experimental: true
          - os: ubuntu-20.04
            container: 'ruby:3.0.0-preview2-alpine'
            experimental: true

How to sign an APK in GitLab CI and send the release to Slack

I want to build a signed Android APK and receive the release APK through a Slack channel. I tried the script below, but it's not working because my app is written with JDK 8.
This is the script I used:
image: jangrewe/gitlab-ci-android
cache:
  key: ${CI_PROJECT_ID}
  paths:
    - .gradle/
before_script:
  - export GRADLE_USER_HOME=$(pwd)/.gradle
  - chmod +x ./gradlew
stages:
  - build
assembleDebug:
  stage: build
  only:
    - development
    - tags
  script:
    - ./gradlew assembleDebug
    - |
      curl \
        -F token="${SLACK_CHANNEL_ACCESS_TOKEN}" \
        -F channels="${SLACK_CHANNEL_ID}" \
        -F initial_comment="Hello team! Here is the latest APK" \
        -F "file=@$(find app/build/outputs/apk/debug -name 'MyApp*')" \
        https://slack.com/api/files.upload
  artifacts:
    paths:
      - app/build/outputs/apk/debug
But it is showing that some Java classes are not found (those Java classes are deprecated in Java 11).
First, you need to set up the Slack authentication keys:
1. Create an app in Slack.
2. Go to the Authentication section and generate an authentication key.
3. Get the ID of the channel which should receive the messages.
4. Mention your app name in your Slack thread and add the app to the channel.
5. Set those keys as variables in your GitLab CI settings:
SLACK_CHANNEL_ACCESS_TOKEN = access token generated by the Slack app
SLACK_CHANNEL_ID = channel ID (check the last section of the channel URL for the channel ID)
6. Copy your existing keystore file to the repository. (Only do this if your project is private.)
7. Change the GitLab script's content to the code below. Make sure to change the certificate password, key password and alias.
image: openjdk:8-jdk
variables:
  # ANDROID_COMPILE_SDK is the version of Android you're compiling with.
  # It should match compileSdkVersion.
  ANDROID_COMPILE_SDK: "29"
  # ANDROID_BUILD_TOOLS is the version of the Android build tools you are using.
  # It should match buildToolsVersion.
  ANDROID_BUILD_TOOLS: "29.0.3"
  # This is the version of the command line tools we're going to download from the official site.
  # Official Site -> https://developer.android.com/studio/index.html
  # There, look down below at the cli tools only, sdk tools package is of format:
  # commandlinetools-os_type-ANDROID_SDK_TOOLS_latest.zip
  # when the script was last modified for the latest compileSdkVersion, it was the version written down below
  ANDROID_SDK_TOOLS: "6514223"
# Packages installation before running script
before_script:
  - apt-get --quiet update --yes
  - apt-get --quiet install --yes wget tar unzip lib32stdc++6 lib32z1
  # Setup path as android_home for moving/exporting the downloaded sdk into it
  - export ANDROID_HOME="${PWD}/android-home"
  # Create a new directory at specified location
  - install -d $ANDROID_HOME
  # Here we are installing androidSDK tools from official source,
  # (the key thing here is the url from where you are downloading these sdk tools for command line, so please do note this url pattern there and here as well)
  # after that unzipping those tools and
  # then running a series of SDK manager commands to install necessary android SDK packages that'll allow the app to build
  - wget --output-document=$ANDROID_HOME/cmdline-tools.zip https://dl.google.com/android/repository/commandlinetools-linux-${ANDROID_SDK_TOOLS}_latest.zip
  # move to the archive at ANDROID_HOME
  - pushd $ANDROID_HOME
  - unzip -d cmdline-tools cmdline-tools.zip
  - popd
  - export PATH=$PATH:${ANDROID_HOME}/cmdline-tools/tools/bin/
  # Nothing fancy here, just checking sdkManager version
  - sdkmanager --version
  # use yes to accept all licenses
  - yes | sdkmanager --sdk_root=${ANDROID_HOME} --licenses || true
  - sdkmanager --sdk_root=${ANDROID_HOME} "platforms;android-${ANDROID_COMPILE_SDK}"
  - sdkmanager --sdk_root=${ANDROID_HOME} "platform-tools"
  - sdkmanager --sdk_root=${ANDROID_HOME} "build-tools;${ANDROID_BUILD_TOOLS}"
  # Not necessary, but just for surety
  - chmod +x ./gradlew
# Make Project
assembleDebug:
  interruptible: true
  stage: build
  only:
    - tags
  script:
    - ls
    - last_v=$(git describe --abbrev=0 2>/dev/null || echo '')
    - tag_message=$(git tag -l -n9 $last_v)
    - echo $last_v
    - echo $tag_message
    - ./gradlew assembleRelease
      -Pandroid.injected.signing.store.file=$(pwd)/Certificate.jks
      -Pandroid.injected.signing.store.password=123456
      -Pandroid.injected.signing.key.alias=key0
      -Pandroid.injected.signing.key.password=123456
    - |
      curl \
        -F token="${SLACK_CHANNEL_ACCESS_TOKEN}" \
        -F channels="${SLACK_CHANNEL_ID}" \
        -F initial_comment="$tag_message" \
        -F "file=@$(find app/build/outputs/apk/release -name 'app*')" \
        https://slack.com/api/files.upload
  artifacts:
    paths:
      - app/build/outputs/

How to detect compiler warnings in GitLab CI

While setting up CI builds on our GitLab server, I can't seem to find information on how to set up detection of compiler warnings. Example build output:
[100%] Building CXX object somefile.cpp.o
/home/gitlab-runner/builds/XXXXXXX/0/group/project/src/somefile.cpp:14:2: warning: #warning ("This is a warning to test gitlab") [-Wcpp]
#warning("This is a warning to test gitlab")
^
However, the build result is success instead of warning or something similar. Ideally the results would also be visible on the feature's merge request (and block the merge if possible).
I can't imagine I'm the only one trying to achieve this, so I am probably looking in the wrong direction. The 'best' solution I found is to somehow manually parse the build output and generate a JUnit report.
How would I go about doing this without letting the build job fail on warnings, given that I do want the job to fail when compiler errors occur?
Update
For anyone stumbling across this question later, and in lieu of a best practice, this is how I solved it:
stages:
  - build
  - check-warnings
shellinspector:
  stage: build
  script:
    - cmake -Bcmake-build -S.
    - make -C cmake-build > >(tee make.output) 2> >(tee make.error)
  artifacts:
    paths:
      - make.output
      - make.error
    expire_in: 1 week
analyse build:
  stage: check-warnings
  script:
    - "if [[ $(cat make.error | grep warning -i) ]]; then cat make.error; exit 1; fi"
  allow_failure: true
This stores the build's error output in make.error in the first stage; the next stage then queries that file for warnings and fails (with allow_failure: true) to create the "passed with warnings" pipeline status I was looking for.
It seems that the solution to this need (e.g., see the issue "Add new CI state: has-warnings", https://gitlab.com/gitlab-org/gitlab-runner/issues/1224) has been to introduce the allow_failure option, so that one job can be the compilation itself, which is not allowed to fail (if it does, the pipeline fails), and another job can be the detection of such warnings, which is allowed to fail (if a warning is found, the pipeline does not fail).
The possibility of defining a warning regex in .gitlab-ci.yml has also been requested, but it does not exist yet.

How to get and build all dependencies of a Go package which has no main function?

I have a following project:
myrepo/ # root package, no main() function
|-common/ # common utilties package
|-cmd/ # CLI root
| |-myrepo-a/ # CLI application A
| | |-main.go # CLI application A main package/function,
| | # setting up feature A for CLI use
| |-myrepo-b/ # CLI application B
| |-main.go # CLI application B main package/function,
| # setting up feature B for CLI use
|-aspecific/ # package with generic implementation for feature A
|-bspecific/ # package with generic implementation for feature B
|-generate.go # file dispatching //go:generate instructions
How do I get and build the dependencies of such a myrepo project?
The go get -v ./... invocation yields errors:
runtime.main_main·f: relocation target main.main not defined
runtime.main_main·f: undefined: "main.main"
Can I avoid manually specifying each of the subpackages in such an invocation, as in go get ./common/... ./cmd/... ./aspecific/... ./bspecific/..., since such hard-coding might result in a desync between the actual project state and the packages getting built?
