I need to run a big command in several jobs and save the results in dynamically created variables.
My idea: save the command as a variable and evaluate it in the script sections of all the jobs.
For example:
```yaml
.grep_command: &grep_command
  GREP_COMMAND: dotnet ef migrations list | grep "VERY_LONG_PATTERN_HERE"

job1:
  variables:
    <<: *grep_command
  script:
    # some job specific code
    - echo $GREP_COMMAND
    - VAR=$(${GREP_COMMAND}) # doesn't work
```
```yaml
job2:
  variables:
    <<: *grep_command
  script:
    # some job specific code
    - echo $GREP_COMMAND
    - echo "VAR=$(${GREP_COMMAND})" > build.env # also doesn't work
```
I found the right way: define the command as a script fragment and use it in the script section, not in variables:
```yaml
.grep-command: &grep-command
  - dotnet ef migrations list | grep "VERY_LONG_PATTERN_HERE"

job1:
  script:
    # some job specific code
    - *grep-command
```
(By the way, saving the command as a variable also works if you use it carefully, but I think it is less clear: variables should stay variables, and commands should stay commands. Mixing them strikes me as bad practice.)
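For reference, the reason `VAR=$(${GREP_COMMAND})` fails is that shell metacharacters such as the pipe are not re-interpreted when a variable is expanded. A minimal sketch of making the variable approach work with eval, using a short stand-in command instead of the real dotnet/grep pipeline:

```shell
# The pipe inside an expanded variable is treated as a literal argument,
# so $(${GREP_COMMAND}) fails. eval re-parses the string as shell code.
GREP_COMMAND='echo one two | grep two'  # stand-in for the long pipeline
VAR=$(eval "$GREP_COMMAND")             # the pipe now works
echo "$VAR"
```

This is exactly the "use it carefully" caveat: eval re-parses its argument, so it must only ever receive trusted strings.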
Related
I have two jobs in the GitLab CI file.
The first job, env_var_test, generates dotenv variables from this command:

```shell
echo '{"apple":"red","boy":"bar","cat":"white"}' | jq -r 'to_entries|map("\(.key)=\(.value|tostring)")|.[]'
```

The second job, env_var_retrive_test, reads a variable from env_var_test's dotenv report, and should be triggered if that variable matches the predefined value in its rules:
```yaml
env_var_test:
  stage: build
  image: $CFIMAGE
  script:
    - echo '{"apple":"red","boy":"bar","cat":"white"}' | jq -r 'to_entries|map("\(.key)=\(.value|tostring)")|.[]' > deploy.env
  artifacts:
    reports:
      dotenv: deploy.env
  tags:
    - linux

env_var_retrive_test:
  stage: deploy
  image: $CFIMAGE
  script:
    - echo "[ $apple - $boy - $cat ]"
  tags:
    - linux
  rules:
    - if: '$boy == "bar"'
      when: always
```
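For the record, the dotenv report only expects plain KEY=value lines. A jq-free sketch producing the same deploy.env content (hypothetical, in case the runner image lacks jq):

```shell
# Write the same KEY=value lines the jq pipeline produces; GitLab's
# dotenv report parser only needs this plain format.
printf '%s=%s\n' apple red boy bar cat white > deploy.env
cat deploy.env
```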
With this setup, I tested them and could see the variables printing correctly in echo "[ $apple - $boy - $cat ]". However, the job was not triggered when I used one of the dotenv variables in the rules section:

```yaml
rules:
  - if: '$boy == "bar"'
    when: always
```

Please correct me if I'm doing it wrong, or help me with a better approach to do the same.
-Thanks
https://docs.gitlab.com/ee/ci/yaml/#rules
You cannot use dotenv variables created in job scripts in rules, because rules are evaluated before any jobs run.
Feature request: https://gitlab.com/gitlab-org/gitlab/-/issues/235812. Please vote for it.
When comparing a variable to a value, you should not enclose the expression in quotes; use `- if: $boy == "bar"` instead.
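Since rules are evaluated before any job runs, one workaround (a sketch, assuming it is acceptable to skip inside the job rather than skip the job itself) is to move the comparison into the script, where the dotenv variables are available:

```yaml
env_var_retrive_test:
  stage: deploy
  image: $CFIMAGE
  script:
    # dotenv variables from env_var_test ARE available here, at runtime
    - |
      if [ "$boy" = "bar" ]; then
        echo "[ $apple - $boy - $cat ]"
      else
        echo "boy is not bar; nothing to do"
      fi
```

The job always runs, but the deploy logic only executes when the runtime condition holds.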
I have an array in bash that looks like:

```
libs=test1 test2
```

I would like to use the output of the bash script in a subsequent step of an ADO pipeline. How can I loop over it in ADO with pipeline variables, like this:
```yaml
- ${{ each value in $(libs) }}:
    - script: echo $value
    - task: Npm@1
      inputs:
        command: 'custom'
        customCommand: npm publish --ignore-scripts
        workingDir: 'dist/libs/$(value)'
        publishRegistry: 'useFeed'
        publishFeed: 'feed'
```
Unfortunately, you cannot, as the each statement works only with parameters, not variables (as per the documentation).
Moreover, variables are always strings, while parameters support different data types.
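For completeness, a hedged sketch of what does work: each over a parameter whose value is fixed at template-expansion time (the parameter name and defaults here are made up):

```yaml
parameters:
  - name: libs
    type: object
    default: [test1, test2]

steps:
  - ${{ each lib in parameters.libs }}:
      - script: echo "Publishing ${{ lib }}"
        displayName: 'Publish ${{ lib }}'
```

The loop is expanded before the pipeline runs, which is exactly why a value produced by a script at runtime cannot drive it.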
I have a .yml file with content like below:

```yaml
env_values:
  stage: build
  before_script: some-value
```

I need to read the values for stage and before_script, which should be build and some-value respectively.
Is there a good, easy workaround in bash scripting to read this file and get the values?
Assume your .yml file is t.yml; this Bash script gets the 2 values you need:

```shell
arr=($(grep "stage:" t.yml))
stage=${arr[1]}
echo $stage

arr=($(grep "before_script:" t.yml))
before_script=${arr[1]}
echo $before_script
```
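An alternative sketch using awk, which avoids the array word-splitting (still assuming one `key: value` pair per line, and recreating the sample file so the snippet is self-contained):

```shell
# Recreate the sample file, then pull the second field of each matching line.
cat > t.yml <<'EOF'
env_values:
  stage: build
  before_script: some-value
EOF

stage=$(awk '/stage:/ {print $2}' t.yml)
before_script=$(awk '/before_script:/ {print $2}' t.yml)
echo "$stage $before_script"
```

Note that both the grep and awk approaches are line-based approximations; for anything beyond trivial files, a real YAML parser is safer.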
We have a project using Azure Pipelines, relying on an azure-pipelines.yml file at the repo's root.
When implementing a script step, it is possible to execute successive commands in the same step simply by writing them on different lines:

```yaml
- script: |
    ls -la
    pwd
    echo $VALUE
```

Yet if we have a single command that is very long, we would like to be able to break it over several lines in the YAML file, but we cannot find the corresponding syntax.
You didn't specify your agent OS, so I tested on both windows-latest and ubuntu-latest. Note that the script task runs a bit differently in these two environments: on Windows it uses cmd.exe, while on Ubuntu it uses bash. Therefore, you have to use the line-continuation syntax of the correct shell.
On Windows:
```yaml
pool:
  vmImage: 'windows-latest'

steps:
  - script: |
      mkdir ^
      test ^
      -p ^
      -v
```
On Ubuntu:

```yaml
pool:
  vmImage: 'ubuntu-latest'

steps:
  - script: |
      mkdir \
      test \
      -p \
      -v
```
Those two files above work on my Azure DevOps.
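Outside of Azure Pipelines, the bash rule can be checked directly: a trailing backslash joins the lines into a single command. A small self-contained sketch:

```shell
# The backslash-newline pairs are removed by bash before parsing,
# so the assignment below is a single command spanning three lines.
msg=$(echo \
  hello \
  world)
echo "$msg"
```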
At the moment, the only way we found to break a single command over multiple lines is using the YAML folded style:
```yaml
- script: >
    echo
    'hello world'
```
It is all about replacing | with >.
Notes:
- It is not possible to introduce extra indentation on the continuation lines. For example, trying to align all the arguments given to a command would break the behaviour.
- This style replaces newlines in the provided value with a single space, which means the script can now only contain a single command. (Adding a literal \n at the end of a line might introduce a line break in the string, but that feels backward compared to the usual approach of an automatic line break unless an explicit continuation is added.)
You can use '^' to break your command line into multiple lines (this is the line-continuation character of cmd.exe, so it applies to Windows agents). Check the example below; the script outputs 'hello world' just like the single-line command echo 'hello world':

```yaml
- script: |
    echo ^
    'hello ^
    world'
```
I'm using the CI Lint tester to try to figure out how to store an expected JSON result, which I later compare to a curl response. Neither of these works:

Attempt 1

```yaml
---
image: ruby:2.1
script:
  - EXPECT_SERVER_OUTPUT='{"message": "Hello World"}'
```

Fails with:

did not find expected key while parsing a block mapping at line 4 column 5
Attempt 2

```yaml
---
image: ruby:2.1
script:
  - EXPECT_SERVER_OUTPUT="{\"message\": \"Hello World\"}"
```

Fails with:

jobs:script config should be a hash

I've also tried various combinations of echo, without finding a working solution.
You could use literal block scalar style notation [1] and put the variable definition and subsequent script lines on separate lines [2], without worrying about quoting:
```yaml
myjob:
  script:
    - |
      EXPECT_SERVER_OUTPUT='{"message": "Hello World"}'
```
or you can escape the nested double quotes:
```yaml
myjob:
  script:
    - "EXPECT_SERVER_OUTPUT='{\"message\": \"Hello World\"}'"
```
but you may also want to just use variables like:
```yaml
myjob:
  variables:
    EXPECT_SERVER_OUTPUT: '{"message": "Hello World"}'
  script:
    - dothething.sh
```
Note: variables are by default expanded inside variable definitions, so take care with any $ characters inside the variable value (they must be written as $$ to stay literal). This feature can also be turned off.
[1] See this answer for an explanation of this and related notation.
[2] See this section of the GitLab docs for more info on multi-line commands.
I made it work like this (note that the assignment itself must not be wrapped in quotes, or the shell would treat the whole string as a single command name):

```yaml
script: |
  EXPECT_SERVER_OUTPUT='{"message": "Hello World"}'
  echo $EXPECT_SERVER_OUTPUT
```
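The quoting behaviour itself is plain shell and can be checked outside of GitLab: single quotes preserve the embedded double quotes, so the JSON survives intact:

```shell
# Single quotes keep the inner double quotes literal; no escaping needed.
EXPECT_SERVER_OUTPUT='{"message": "Hello World"}'
echo "$EXPECT_SERVER_OUTPUT"
```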