I want to write a while loop in a GitLab CI file. Here is the syntax that I've tried, but it doesn't seem to work.
Are while loops allowed in GitLab CI YAML files, or is there another way to write this?
Here is where I used it:
- while ($(curl -X GET ${URL} | jq -r '.task.status') != "SUCCESS")
ANALYSIS_ID=$(curl -X GET ${URL} | jq -r '.task.analysisId')
Why don't you write yourself a shell/python/whatever script and just run it from the CI?
YAML is not a suitable language for such things (e.g. while loops, large conditionals, for loops) and should not be used that way...
So here is how I resolved my issue: I created a script containing the while loop, which prints the value I needed, and then I called this script from my gitlab-ci file as below:
- ANALYSIS_ID=$(./checkUrl.sh "$URL")
And, if needed, here is the script that I used:
#!/bin/bash
success="SUCCESS"
condition="$(curl -s -X GET "$1" | jq -r '.task.status')"
while [ "$condition" != "$success" ]
do
  sleep 5  # wait between polls so we don't hammer the endpoint
  condition="$(curl -s -X GET "$1" | jq -r '.task.status')"
done
# A script cannot "return" a string; echo it so the caller's $(...) captures it
ANALYSIS_ID="$(curl -s -X GET "$1" | jq -r '.task.analysisId')"
echo "$ANALYSIS_ID"
First off, I'm not sure if I'm doing something wrong or if there are bugs that need to be worked out in the go-task project on GitHub.
I'm occasionally running into an error that looks something like this:
context canceled
task: Failed to run task "start": task: Failed to run task "upstream:common": task: Failed to run task "upstream:common:merge:variables": task: Failed to run task "upstream:common:merge:variables:subtype": fork/exec /usr/bin/jq: bad file descriptor
Some sample code that causes this issue is:
function mergePackages() {
  # Merge the two JSON files: union their .keywords arrays, then
  # deep-merge the second file over the first
  TMP="$(mktemp)"
  jq --arg keywords "$(jq '.keywords[]' "$1" "$2" | jq -s '. | unique')" -s -S '.[0] * .[1] | .keywords = ($keywords | fromjson) | .' "$1" "$2" > "$TMP"
  mv "$TMP" "$3"
}
for FOLDER in project-*/; do
  SUBTYPE="$(echo "$FOLDER" | sed 's/project-\(.*\)\//\1/')"
  mergePackages "./.common/package.hbs.json" "project-$SUBTYPE/package.hbs.json" "./.common-$SUBTYPE/package.hbs.json" &
done
wait
The issue seems to occur most often when I add & to the end of a bash command so that a function runs in the background, although I have seen it happen even when a function is called without &.
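One way to narrow it down would be to give each background job its own stderr log, so the failing invocation can be identified (the log naming here is just illustrative):
# Same loop as above, but each job's stderr goes to its own file;
# after "wait", inspect merge-*.log to see which invocation failed.
for FOLDER in project-*/; do
  SUBTYPE="$(echo "$FOLDER" | sed 's/project-\(.*\)\//\1/')"
  mergePackages "./.common/package.hbs.json" \
    "project-$SUBTYPE/package.hbs.json" \
    "./.common-$SUBTYPE/package.hbs.json" 2>"merge-$SUBTYPE.log" &
done
wait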
Is there anything I am doing wrong?
I have a hash file containing several MD5 hashes.
I want to create a bash script that queries VirusTotal with curl to check whether the hashes are known.
#!/bin/bash
for line in "hash.txt";
do
echo $line; curl -s -X GET --url 'https://www.virustotal.com/vtapi/v2/file/report?apikey=a54237df7c5c38d58d2240xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxcc0a0d7&resource='$line'';
done
but it's not working.
Could you help me, please?
Better to use a while loop. Your for loop would only run once, because bash interprets "hash.txt" as a literal value, not as a file to read. Try this:
while read -r line; do
echo "$line"
curl -s -X GET --url "https://www.virustotal.com/vtapi/v2/file/report?apikey=a54237df7c5c38d58d2240xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxcc0a0d7&resource=$line"
done <"/path/to/hash.txt"
I am trying to create a GitLab CI/CD job that takes a bash script from a YAML file and checks if the syntax is correct.
I have a YAML file called .gitlab-ci.template.yml with the following content:
image: node:10.15.3-alpine

stages:
  - deploy

deploy:
  stage: deploy
  script:
    - apk_install
    - yarn_install
    - build
    - deploy

.bash: &bash |
  function apk_install() {
    apk add jq git curl grep
  }

  function yarn_install() {
    yarn
  }

  function build() {
    echo "Build"

    if [ -f ".gitlab-ci.template.yml" ]; then
      echo "Just a dumb test for more lines"
    fi
  }

  function deploy() {
    echo "Deploy"
  }

before_script:
  - *bash
I would like to take the bash part and test it. I installed shyaml to get values from the YAML file, like so:
FILE=`cat .gitlab-ci.template.yml | shyaml get-value before_script`
The contents of $FILE would be:
- "function apk_install() {\n apk add jq git curl grep\n}\n\nfunction yarn_install()\ \ {\n yarn\n}\nfunction build() {\n echo \"Build\"\n \n if [ -f \".gitlab-ci.template.yml\"\ \ ]; then\n echo \"Just a dumb test for more lines\"\n fi\n}\n\nfunction deploy()\ \ {\n echo \"Deploy\"\n}\n"
Then with the following command I try to get a valid file again:
echo $FILE | xargs -0 printf '%b\n' | sed 's/\\[[:space:]]\\[[:space:]]/ /g' | sed 's/\\"/"/g' | sed 's/\\[[:space:]]//g'
Now I can remove some characters from the beginning and end. But I was wondering: is there a better/easier way?
Have you considered using yq?
Here's an expression to get the shell script out of your file:
yq -r '.[".bash"]' .gitlab-ci.template.yml
The result is the actual script. Now you just need to pipe it to a bash linter. I looked through this site for you, but I couldn't find the bash syntax parser that is often cited in the bash posts (YMMV). I did find ShellChecker via Google, but I didn't evaluate it thoroughly (so use it as you see fit).
Ultimately, your code may look like this:
yq -r '.[".bash"]' .gitlab-ci.template.yml | ShellChecker # or use your favorite bash linter
To lint shell script embedded in GitLab CI YAML, a combination of git, yq, and shellcheck does the trick as demonstrated by this GitLab CI job:
shellcheck:
  image: alpine:3.15.0
  script:
    - |
      # shellcheck shell=sh
      # Check *.sh
      git ls-files --exclude='*.sh' --ignored -c -z | xargs -0r shellcheck -x
      # Check files with a shell shebang
      git ls-files -c | while IFS= read -r file; do
        if head -n1 "${file}" | grep -q "^#\\! \?/.\+\(ba\|d\|k\)\?sh" ; then
          shellcheck -x "${file}"
        fi
      done
      # Check scripts embedded in GitLab CI YAML named *.gitlab-ci.yml
      newline="$(printf '\nq')"
      newline=${newline%q}
      git ls-files --exclude='*.gitlab-ci.yml' --ignored -c | while IFS= read -r file; do
        yq eval '.[] | select(tag=="!!map") | (.before_script,.script,.after_script) | select(. != null ) | path | ".[\"" + join("\"].[\"") + "\"]"' "${file}" | while IFS= read -r selector; do
          script=$(yq eval "${selector} | join(\"${newline}\")" "${file}")
          if ! printf '%s' "${script}" | shellcheck -; then
            >&2 printf "\nError in %s in the script specified in %s:\n%s\n" "${file}" "${selector}" "${script}"
            exit 1
          fi
        done
      done
The job will check script, after_script, and before_script which are all the places where shell script can appear in GitLab CI YAML. It searches for GitLab CI YAML files matching the naming convention *.gitlab-ci.yml.
For convenience, this job also checks *.sh and files that start with a shell shebang.
I also posted this information to my site at https://candrews.integralblue.com/2022/02/shellcheck-scripts-embedded-in-gitlab-ci-yaml/
Just noticed I should get the first occurrence from before_script with shyaml. So I changed:
FILE=`cat .gitlab-ci.template.yml | shyaml get-value before_script`
to:
FILE=`cat .gitlab-ci.template.yml | shyaml get-value before_script.0`
Which will get the script as-is.
Depending on certain conditions, I want to use a JWT; otherwise, I want to provide paths to certs. Thus, in my shell script this is the code:
if /* some condition */; then
  authorization='-H "Authorization: Bearer '"${JWT}"'"'
else
  authorization="--cert ${ADMIN_CERT_PATH} --key ${ADMIN_KEY_PATH}"
fi
Now the curl request should be:
curl -H "Authorization: Bearer 348129" for if condition
curl --cert /Users/.../admin_cert --key /Users/../admin_key .. for else path
In order to get that output, I need to use the following format in my shell script for the if branch:
response_code="$(curl -s -o /dev/null -w "%{http_code}" "$authorization" "$status_url")"
and the following format for the else branch:
response_code="$(curl -s -o /dev/null -w "%{http_code}" $authorization "$status_url")"
Note:
I need the $authorization variable quoted in the first case and unquoted in the else case.
I do not want to write 2 different curl commands, but instead reuse the authorization variable.
Thus, I need to modify the way I declare my authorization variable so that I can write the curl command only once and have it work for both the if and else cases.
curl supports a way to pass command-line parameters in a file, which I have used before when I had complex parameters. The idea is to place the complex command-line parameters into a simple text file and instruct curl to read them from it using the --config parameter.
In this case the shell script would look something like the following.
#!/bin/sh
## "safely" create a temporary configuration file
curlctl=$(mktemp -q -t "$(basename "$0")")
if test $? -ne 0
then
echo "$0: failed to create temporary file, exiting."
exit 75 # EX_TEMPFAIL
fi
trap 'rm "$curlctl"' 0
## write parameters used in all cases
cat>>"$curlctl" <<EOF
output = /dev/null
silent
write-out = %{http_code}
EOF
## append conditional parameters
if test "$some" = 'condition'
then
printf 'header = "Authorization: Bearer %s"\n' "$JWT" >> "$curlctl"
else
echo "cert = $ADMIN_CERT_PATH" >> "$curlctl"
echo "key = $ADMIN_KEY_PATH" >> "$curlctl"
fi
# uncomment to see what the config file looks like
# cat "$curlctl" | sed 's/^/curl config: /'
response_code=$(curl --config "$curlctl" http://httpbin.org/get)
echo "response code: $response_code"
The first few lines set up a temporary file that is deleted when the shell script exits. If you are already using trap then your cleanup will probably be more complex.
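For reference, when the JWT branch is taken, the generated configuration file contains something like this (using the example token from the question):
output = /dev/null
silent
write-out = %{http_code}
header = "Authorization: Bearer 348129"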
When you are using a shell that supports arrays, you can avoid the need for a temporary configuration file.
curl_opts=(-s -o /dev/null -w "%{http_code}")
if /* some condition */; then
curl_opts+=(-H "Authorization: Bearer $JWT")
else
curl_opts+=(--cert "$ADMIN_CERT_PATH" --key "$ADMIN_KEY_PATH")
fi
...
response_code="$(curl "${curl_opts[#]}" "$status_url")"
I have a JSON file with entries containing URLs (among other things), which I retrieve using curl.
I'd like to run several iterations of the loop at once to go faster, but also to cap the number of parallel curls, to avoid being kicked out by the remote server.
For now, my code looks like this:
jq -r '.entries[] | select(.enabled != false) | .id,.unitUrl' $fileIndexFeed | \
while read unitId; do
  read -r unitUrl
  if ! in_array tabAnnoncesExistantesIds $unitId; then
    fullUnitUrl="$unitUrlBase$unitUrl"
    unitFile="$unitFileBase$unitId.json"
    if [ ! -f $unitFile ]; then
      curl -H "Authorization:$authMethod $encodedHeader" -X GET $fullUnitUrl -o $unitFile
    fi
  fi
done
If I use a simple & at the end of my curl, it will run lots of concurrent requests, and I could be kicked.
So the question would be (I suppose): how do I know that a curl run with & has finished its job? If I'm able to detect that, then I guess I can test, increment, and decrement a variable tracking the number of running curls.
Thanks
Use GNU Parallel to control the number of parallel jobs. Either write your curl commands to a file so you can look at them and check them:
commands.txt
curl "something" "somehow" "toSomewhere"
curl "somethingelse" "someotherway" "toSomewhereElse"
Then, if you want no more than 8 jobs running at a time, run:
parallel -j 8 --eta -a commands.txt
Or you can just write the commands to GNU Parallel's stdin:
jq ... | while read ...; do
printf "curl ..."
done | parallel -j 8
Use a Bash function:
doit() {
unitId="$1"
unitUrl="$2"
if ! in_array tabAnnoncesExistantesIds $unitId; then
fullUnitUrl="$unitUrlBase$unitUrl"
unitFile="$unitFileBase$unitId.json"
if [ ! -f $unitFile ]; then
curl -H "Authorization:$authMethod $encodedHeader" -X GET $fullUnitUrl -o $unitFile
fi
fi
}
jq -r '.entries[] | select(.enabled != false) | .id,.unitUrl' $fileIndexFeed |
env_parallel -N2 doit
env_parallel will import the environment, so all shell variables are available.
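Note that env_parallel must be activated in the shell before first use; for bash this is typically done by sourcing it (the path may vary by installation):
# Activate env_parallel for bash (often done once, e.g. in ~/.bashrc)
. "$(which env_parallel.bash)"
# Then cap concurrency just like with parallel, e.g. 8 jobs at a time:
jq -r '.entries[] | select(.enabled != false) | .id,.unitUrl' $fileIndexFeed |
  env_parallel -j 8 -N2 doit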