See the sample script below. It roughly resembles what our pipeline looks like, albeit simplified.
steps:
  - step: &test-sonar
      name: test and analyze on SonarCloud
      script:
        - {some command}
        - {some command}
        - {some command}
        - {some command}
        - if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
        - pip install pytest
        - pytest --cov=tests/ --cov-report xml:coverage-reports/coverage-report.xml --junitxml=test-reports/report.xml
        - {some command}
        - {some command}
        - {some command}
        - {some command}
        - {some command}
        - {some command}
        - {some command}
        - pipe: sonarsource/sonarcloud-scan:1.4.0
  - step: &check-quality-gate-sonarcloud
      name: Check the Quality Gate on SonarCloud
      script:
        - pipe: sonarsource/sonarcloud-quality-gate:0.1.6
This script is what we run whenever we merge into the master branch.
The other pipeline would mostly be the same set of scripts, but the flags of the pytest command are slightly different.
And again, for a scheduled pipeline, the scripts would be mostly the same, with some slight changes to the flags of the pytest command.
I wouldn't want to repeat the same script three times, and I'm not sure how to make this a bit more reusable.
The only thing I can think of is using Bitbucket variables to change how pytest is executed depending on the type of pipeline, but I'm still wrapping my head around that as well.
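For reference, a rough sketch of that variables idea. The PYTEST_ARGS name, the pipeline names and the flag values are made up, and it assumes Bitbucket runs a step's script lines in a single shell session, so the export is visible to the anchored command that follows:

definitions:
  yaml-anchors:
    # shared test command; reads PYTEST_ARGS exported by each pipeline
    - &run-tests >-
      pip install pytest
      && pytest $PYTEST_ARGS

pipelines:
  branches:
    master:
      - step:
          name: Test and analyze (master)
          script:
            - export PYTEST_ARGS="--cov=tests/ --cov-report xml:coverage-reports/coverage-report.xml --junitxml=test-reports/report.xml"
            - *run-tests
  custom:
    nightly:
      - step:
          name: Test and analyze (scheduled)
          script:
            - export PYTEST_ARGS="--cov=tests/ --junitxml=test-reports/report.xml"
            - *run-tests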
Some of my pipelines share a common initialization that I chain into a single instruction with the && operator and store in a YAML anchor, like:
definitions:
  yaml-anchors:
    - &common-init-script >-
      command1
      && command2
      && command3
    - &aaa-step
      script:
        - *common-init-script
        - some-command
    - &bbb-step
      script:
        - *common-init-script
        - some-different-command

pipelines:
  ...
  my-pipeline:
    - step: *aaa-step
    - step: *bbb-step
Hopefully, the && operator will short-circuit if anything fails (provided all your commands exit with a non-zero status code on failure). The drawback is that the whole chain is treated as a single instruction by Bitbucket, so you lose per-instruction time measurements, and the console output is concatenated, which sometimes makes it hard to tell where one command's output ends and the next begins.
Ideally, you would store YAML lists of commands in anchors and later merge them with other inline sequences of commands, but this idea had a poor reception last time I checked https://github.com/yaml/yaml/issues/48, the main points being:
It should be up to the application (the Bitbucket Pipelines parser) to decide whether lists of lists of commands should be flattened.
It is unclear how to introduce this into the YAML spec without causing havoc.
And now I can't find the Jira issue, but Atlassian was also reluctant to implement this on their end. So, here we are.
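To make that last point concrete, this is roughly what one would like to write; since neither the YAML spec nor Bitbucket's parser flattens the nested list, it does not work (command names are placeholders):

definitions:
  yaml-anchors:
    - &common-init
      - command1
      - command2
    - &aaa-step
      script:
        - *common-init        # ends up as a list nested inside the list of commands
        - some-command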
Related
I have two jobs in the GitLab CI file.
The first job, env_var_test, generates the dotenv variables with this command:
echo '{"apple":"red","boy":"bar","cat":"white"}' | jq -r 'to_entries|map("\(.key)=\(.value|tostring)")|.[]'
The second job, env_var_retrive_test, reads a variable from env_var_test's dotenv report, and if the variable matches the predefined value in the CI/CD rules, it should trigger:
env_var_test:
  stage: build
  image: $CFIMAGE
  script:
    - echo '{"apple":"red","boy":"bar","cat":"white"}' | jq -r 'to_entries|map("\(.key)=\(.value|tostring)")|.[]' > deploy.env
  artifacts:
    reports:
      dotenv: deploy.env
  tags:
    - linux

env_var_retrive_test:
  stage: deploy
  image: $CFIMAGE
  script:
    - echo "[ $apple - $boy - $cat ]"
  tags:
    - linux
  rules:
    - if: '$boy == "bar"'
      when: always
With this setup, I tested the jobs and could see the variables printing correctly in echo "[ $apple - $boy - $cat ]". However, the job does not trigger when I reference the variable in the rules section:
rules:
  - if: '$boy == "bar"'
    when: always
Please correct me if I'm doing it wrong, or suggest a better approach to achieve the same.
https://docs.gitlab.com/ee/ci/yaml/#rules
You cannot use dotenv variables created in job scripts in rules, because rules are evaluated before any jobs run.
Feature request: https://gitlab.com/gitlab-org/gitlab/-/issues/235812. Please vote for it.
Also, when you are comparing a variable to some value, you should not enclose the expression in quotes. You should use - if: $boy == "bar"
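As a possible workaround (just a sketch, since the rule itself cannot see the dotenv variable): keep the downstream job unconditional and do the comparison at runtime inside its script, where the variable from env_var_test is available:

env_var_retrive_test:
  stage: deploy
  image: $CFIMAGE
  script:
    - |
      if [ "$boy" = "bar" ]; then
        echo "[ $apple - $boy - $cat ]"   # put the real deploy commands here
      else
        echo "boy != bar, skipping"
      fi
  tags:
    - linux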
I need to run a big command in several jobs and save the results in dynamically created variables.
My idea: save the command as a variable and evaluate it in the script sections of all jobs.
For example:
.grep_command: &grep_command
  GREP_COMMAND: dotnet ef migrations list | grep "VERY_LONG_PATTERN_HERE"

job1:
  variables:
    <<: *grep_command
  script:
    # some job specific code
    - echo $GREP_COMMAND
    - VAR=$(${GREP_COMMAND}) # doesn't work

job2:
  variables:
    <<: *grep_command
  script:
    # some job specific code
    - echo $GREP_COMMAND
    - echo "VAR=$(${GREP_COMMAND})" > build.env # also doesn't work
I found the right way:
Define the command as a script line and use it in the script section, not in variables:
.grep-command: &grep-command
  - dotnet ef migrations list | grep "VERY_LONG_PATTERN_HERE"

job1:
  script:
    # some job specific code
    - *grep-command
(By the way, saving the command as a variable also works if you use it carefully, but I find it less clear - variables should stay variables, and commands should stay commands. Mixing them seems like bad practice.)
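If the original goal of saving the result for later jobs still matters, one possible sketch (the MIGRATION variable, the build.env file and job2 are made up for illustration) is to capture the output inside the anchored script and publish it as a dotenv artifact:

.grep-command: &grep-command
  # capture the output of the long command into a shell variable
  - MIGRATION=$(dotnet ef migrations list | grep "VERY_LONG_PATTERN_HERE")

job1:
  script:
    - *grep-command
    - echo "MIGRATION=$MIGRATION" > build.env   # expose it to later jobs
  artifacts:
    reports:
      dotenv: build.env

job2:
  needs: ["job1"]
  script:
    - echo "Found migration: $MIGRATION"        # available via the dotenv report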
I have an array in bash that looks like:
libs="test1 test2"
I would like to use the output of the bash script in a subsequent step of an ADO pipeline. How can I loop over this in ADO with pipeline variables, like:
- ${{ each value in $(libs) }}:
  - script: echo $value
  - task: Npm@1
    inputs:
      command: 'custom'
      customCommand: npm publish --ignore-scripts
      workingDir: 'dist/libs/$(value)'
      publishRegistry: 'useFeed'
      publishFeed: 'feed'
Unfortunately, you cannot, as the each statement works only with parameters and not with variables (as per the documentation).
Moreover, variables are only strings, while parameters can have different data types.
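For completeness, a sketch of what each does support: iterating over a compile-time parameter list (the library names are placeholders, and the publish task from the question is reduced to an echo to keep the example minimal):

parameters:
  - name: libs          # must be known at template-expansion time; cannot come from a runtime bash variable
    type: object
    default:
      - test1
      - test2

steps:
  - ${{ each lib in parameters.libs }}:
    - script: echo "Publishing from dist/libs/${{ lib }}"
      displayName: 'Publish ${{ lib }}'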
We have a project using Azure Pipeline, relying on azure-pipelines.yml file at the repo's root.
When implementing a script step, it is possible to execute successive commands in the same step simply by writing them on different lines:
- script: |
    ls -la
    pwd
    echo $VALUE
Yet, if we have a single command that is very long, we would like to break it across several lines in the YAML file, but cannot find the corresponding syntax.
You didn't specify your agent OS, so I tested on both windows-latest and ubuntu-latest. Note that the script task runs a bit differently in these two environments: on Windows it uses cmd.exe, on Ubuntu it uses bash. Therefore, you have to use the correct syntax for each.
On Windows:
pool:
  vmImage: 'windows-latest'

steps:
  - script: |
      mkdir ^
      test ^
      -p ^
      -v
On Ubuntu:
pool:
  vmImage: 'ubuntu-latest'

steps:
  - script: |
      mkdir \
      test \
      -p \
      -v
Those two files above work on my Azure DevOps.
At the moment, the only way we found to break a single command across multiple lines is the YAML folded style:
- script: >
    echo
    'hello world'
It is all about replacing | with >.
Notes:
It is not possible to introduce extra indentation on the continuation lines! For example, trying to align all the arguments given to a command would break the behaviour.
This style replaces the newlines in the provided value with a single space, so the script can now contain only one command. (Adding a literal \n at the end of a line would introduce a line break in the string, but that feels backwards compared to the usual approach of automatic line breaks unless an explicit continuation is added.)
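A small illustration of those notes (mycommand and its flags are placeholders):

# Works: continuation lines at the same indentation fold into one line
- script: >
    mycommand --input foo --flag
    --output bar

# Breaks: the extra indentation makes YAML preserve the line breaks, so the
# folded value is no longer a single command line
- script: >
    mycommand --input foo --flag
              --output bar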
You can use '^' to break your command line into multiple lines. Check the example below. The script below will output 'hello world', just like the single-line command echo 'hello world'.
- script: |
    echo ^
    'hello ^
    world'
I'm using the CI Lint tester to try to figure out how to store an expected JSON result, which I later compare to a curl response. Neither of these works:
Attempt 1
---
image: ruby:2.1
script:
  - EXPECT_SERVER_OUTPUT='{"message": "Hello World"}'
Fails with:
did not find expected key while parsing a block mapping at line 4 column 5
Attempt 2
---
image: ruby:2.1
script:
  - EXPECT_SERVER_OUTPUT="{\"message\": \"Hello World\"}"
Fails with:
jobs:script config should be a hash
I've tried using various combinations of echo as well, without a working solution.
You could use literal block scalar [1] style notation and put the variable definition and subsequent script lines on separate lines [2] without worrying about quoting:
myjob:
  script:
    - |
      EXPECT_SERVER_OUTPUT='{"message": "Hello World"}'
or you can escape the nested double quotes:
myjob:
  script:
    - "EXPECT_SERVER_OUTPUT='{\"message\": \"Hello World\"}'"
but you may also want to just use variables like:
myjob:
  variables:
    EXPECT_SERVER_OUTPUT: '{"message": "Hello World"}'
  script:
    - dothething.sh
Note: variables are expanded inside variable definitions by default, so take care with any $ characters inside the variable value (they must be written as $$ to be literal). This feature can also be turned off.
[1] See this answer for an explanation of this and related notation.
[2] See this section of the GitLab docs for more info on multi-line commands.
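A tiny illustration of the note above about $$ (the variable name is made up):

myjob:
  variables:
    PRICE_LABEL: 'cost is $$100'   # the job sees "cost is $100"
  script:
    - echo "$PRICE_LABEL"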
I made it work like this:
script: |
  EXPECT_SERVER_OUTPUT='{"message": "Hello World"}'
  echo $EXPECT_SERVER_OUTPUT