Problem with passing parameters in a shell function - bash

I have the following shell script. There is a deploy function that is used later in the script to deploy containers.
function deploy {
    if [[ $4 -eq QA ]]; then
        echo Building a QA Test container...
        docker create \
            --name=$1_temp \
            -e DATABASE=$3 \
            -e SPECIAL_ENV_VARIABLE \
            -p $2:9696 \
            ... # skipping other project specific settings
    else
        docker create \
            --name=$1_temp \
            -e DATABASE=$3 \
            -p $2:9696 \
            ... # skipping some project specific stuff
    fi
}
During the deployment, I have to run some tests on the application (which runs in containers). I use different containers for that, but I need to pass one additional parameter to the deploy function for my QA test container, because it needs a different setting in docker create. That is why I put the if statement at the beginning: it checks whether the 4th argument equals 'QA', and if it does it creates a specific container with special env variables; otherwise, with just 3 arguments, it creates a 'normal' one. I was able to run the code with two separate deploy functions, but I want to make my code more readable. Anyway, this is how it should go:
Step 1: Normal tests:
deploy container_test 9696 test_database # 3 parameters
run tests... (this is not relevant to the question)
Step 2: QA testing:
deploy container_qa_test 9696 test_database QA # 4 parameters, so I can create a special container
run tests... (again, not relevant to the question)
Step 3: If they are successful, deploy a production-ready container:
deploy production_container 9696 production_database # 3 parameters again
However, this is what happens according to the log:
Step 1: test_container is created, but it is created by the upper if branch, even though there is no 4th parameter equal to QA.
Step 2: this runs normally.
Step 3: the production container is built as a QA container.
It never reaches the else part, even when the condition is not satisfied. Can anyone give me some tips?

Just change [[ $4 -eq QA ]] to:
if [[ "$4" == "QA" ]]; then
-eq is used to compare numbers; for strings use == (or =). Inside [[ ]], -eq evaluates both sides arithmetically, so the string QA and an empty $4 both evaluate to 0, which is why the if branch runs on every call here.
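A quick way to see the difference (a minimal sketch run in a bash shell; the argument values are taken from the question's Step 1):

set -- container_test 9696 test_database          # only 3 arguments, so $4 is empty
[[ $4 -eq QA ]] && echo "arithmetic test: true"   # prints: empty $4 and QA both evaluate to 0
[[ "$4" == "QA" ]] && echo "string test: true"    # prints nothing: the empty string is not "QA"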

Related

Problem with snakemake submitting jobs with multiple wildcards on SGE

I used snakemake on an LSF cluster before and everything worked just fine. However, recently I migrated to an SGE cluster and I am getting a very strange error when I try to run a job with more than one wildcard.
When I try to submit a job based on this rule
rule download_reads:
    threads: 1
    output: "data/{sp}/raw_reads/{accesion}_1.fastq.gz"
    shell: "scripts/download_reads.sh {wildcards.sp} {wildcards.accesion} data/{wildcards.sp}/raw_reads/{wildcards.accesion}"
I get the following error (snakemake_clust.sh details below):
./snakemake_clust.sh data/Ecol1/raw_reads/SRA123456_1.fastq.gz
Building DAG of jobs...
Using shell: /bin/bash
Provided cluster nodes: 10
Job counts:
count jobs
1 download_reads
1
[Thu Jul 30 12:08:57 2020]
rule download_reads:
output: data/Ecol1/raw_reads/SRA123456_1.fastq.gz
jobid: 0
wildcards: sp=Ecol1, accesion=SRA123456
scripts/download_reads.sh Ecol1 SRA123456 data/Ecol1/raw_reads/SRA123456
Unable to run job: ERROR! two files are specified for the same host
ERROR! two files are specified for the same host
Exiting.
Error submitting jobscript (exit code 1):
Shutting down, this might take some time.
When I replace the sp wildcard with a constant, it works as expected:
rule download_reads:
    threads: 1
    output: "data/Ecol1/raw_reads/{accesion}_1.fastq.gz"
    shell: "scripts/download_reads.sh Ecol1 {wildcards.accesion} data/Ecol1/raw_reads/{wildcards.accesion}"
I.e. I get
Submitted job 1 with external jobid 'Your job 50731 ("download_reads") has been submitted'.
I wonder why I have this problem; I am sure I used exactly the same rule on the LSF-based cluster before without any problems.
some details
The snakemake submission script looks like this:
#!/usr/bin/env bash
mkdir -p logs
snakemake "$@" -p --jobs 10 --latency-wait 120 --cluster "qsub \
  -N {rule} \
  -pe smp64 {threads} \
  -cwd \
  -b y \
  -o \"logs/{rule}.{wildcards}.out\" \
  -e \"logs/{rule}.{wildcards}.err\""
-b y makes the command be executed as it is, -cwd changes the working directory on the compute node to the working directory from which the job was submitted. The other flags / specifications are clear, I hope.
Also, I am aware of the --drmaa flag, but I think our cluster is not configured well for that. --cluster has been the more robust solution so far.
-- edit 1 --
When I execute exactly the same Snakefile locally (on the frontend, without the --cluster flag), the script gets executed as expected, so it seems to be a problem of the interaction between snakemake and the scheduler.
-o \"logs/{rule}.{wildcards}.out\" \
-e \"logs/{rule}.{wildcards}.err\""
This is a random guess... When there is more than one wildcard, they are concatenated with a space before being substituted into logs/{rule}.{wildcards}.err. So even though you use double quotes, SGE treats the resulting string as two files and throws the error. What if you use single quotes instead? Like:
-o 'logs/{rule}.{wildcards}.out' \
-e 'logs/{rule}.{wildcards}.err'
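For illustration, here is a sketch of the whole submission script with that change (everything else kept as in snakemake_clust.sh above). The idea is that {wildcards} expands to a string containing a space, e.g. sp=Ecol1, accesion=SRA123456 in the log above, and the single quotes are meant to keep that whole string inside a single -o/-e argument when the generated qsub command is run:

#!/usr/bin/env bash
# Sketch: same as snakemake_clust.sh, but with single-quoted log paths
# so the space inside the expanded {wildcards} stays within one argument.
mkdir -p logs
snakemake "$@" -p --jobs 10 --latency-wait 120 --cluster "qsub \
  -N {rule} \
  -pe smp64 {threads} \
  -cwd \
  -b y \
  -o 'logs/{rule}.{wildcards}.out' \
  -e 'logs/{rule}.{wildcards}.err'"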
Alternatively, you could concatenate the wildcards in the rule and use the result on the command line. E.g.:
rule one:
    params:
        wc=lambda wc: '_'.join(wc)
    output: ...
Then use:
-o 'logs/{rule}.{params.wc}.out' \
-e 'logs/{rule}.{params.wc}.err'
(This second solution, if it works, kind of sucks though)

Variable expansion of trigger branch property prevents downstream pipeline from being created

A bridge job in which the branch property of the trigger property uses a variable will always fail with reason: downstream pipeline can not be created.
Steps to reproduce
Set up a downstream pipeline with a trigger property as you would normally.
Add a branch property to the trigger property. Write the name of an existing branch on the downstream repository, like master/main or the name of a feature branch.
Run the pipeline and observe that the downstream pipeline is successfully created.
Now change the branch property to use a variable instead, like branch: $CI_TARGET_BRANCH.
Manually run the CI pipeline, setting that variable through the GitLab GUI.
The job will instantly fail with reason: downstream pipeline can not be created.
Code example
The goal is to create a GitLab CI config that runs the pipeline of a specified downstream branch. The bug occurs when attempting to do it with a variable.
This works, creating a downstream pipeline like normal. But the branch name is hardcoded:
stages:
  - deploy

deploy:
  variables:
    environment: dev
  stage: deploy
  trigger:
    project: group/project
    branch: foo
    strategy: depend
This does not work; although TARGET_BRANCH is set successfully, the job fails because the downstream pipeline can not be created:
stages:
  - removeme
  - deploy

before_script:
  - if [ -z "$TARGET_BRANCH" ]; then TARGET_BRANCH="main"; fi
  - echo $TARGET_BRANCH

test_variable:
  stage: removeme
  script:
    - echo $TARGET_BRANCH

deploy:
  variables:
    environment: dev
  stage: deploy
  trigger:
    project: group/project
    branch: $TARGET_BRANCH
    strategy: depend
If you know what I'm doing wrong, or you have something that does work with variable expansion of the branch property, please share it (along with your GitLab version). Alternate solutions are also welcome, but this one seems like it should work.
GitLab Version on which bug occurs
Self-hosted GitLab Community Edition 12.10.7
What is the current bug behavior?
The job always fails for reason: downstream pipeline can not be created.
What is the expected correct behavior?
The branch property should be set to the value of the variable and the downstream pipeline should be created as normal, just as if you simply hardcoded/typed the name of the branch.
More details
The ability to use variable expansion in the trigger branch property was added in v12.4, and it's explicitly mentioned in the docs.
I searched for other .gitlab-ci.yml / GitLab config files. Every single one that attempted to use variable expansion in the branch property had it commented out, saying it was bugged for an unknown reason (example).
I haven't been able to find a repository in which someone claimed to have a working variable expansion for the branch property of the trigger property.
Unfortunately, the alternate solutions are either (a) hardcoding every downstream branch name into the GitLab CI config of the upstream project, or (b) not being able to test changes to the downstream GitLab CI config without first committing them to master/main, or having to use only/except.
TL;DR: How to use the value of a variable for the branch property of a bridge job? My current solution makes it so the job fails and the downstream pipeline isn't created.
This is a 'works as designed', and GitLab will improve it in upcoming releases.
The trigger job is pretty weak because it is not a full job that runs on a runner. Therefore most of the trigger configuration needs to be hardcoded.
I use direct API calls to trigger downstream pipelines, passing the CI_JOB_TOKEN, which links the upstream job to the downstream pipeline just as the trigger keyword does.
API calls give you full control:
curl -X POST \
-s \
-F token=${CI_JOB_TOKEN} \
-F "ref=${REF_NAME}" \
-F "variables[STAGE]=${STAGE}" \
"${CI_SERVER_URL}/api/v4/projects/${CI_PROJECT_ID}/trigger/pipeline"
Now, this will not wait for and monitor the downstream pipeline, so you will need to code for that yourself if you need to wait for the downstream pipeline to finish.
Moreover, CI_JOB_TOKEN cannot be used to get the status of the downstream pipeline, so you will need another token for that.
- |
  DOWNSTREAM_RESULTS=$( curl --silent -X POST \
    -F token=${CI_JOB_TOKEN} \
    -F "ref=${DOWNSTREAM_PROJECT_REF}" \
    -F "variables[STAGE]=${STAGE}" \
    -F "variables[SLS_PACKAGE_PATH]=.serverless-${STAGE}" \
    -F "variables[INVOKE_SLS_TESTS]=false" \
    -F "variables[UPSTREAM_PROJECT_REF]=${CI_COMMIT_REF_NAME}" \
    -F "variables[INSTALL_SLS_PLUGINS]=${INSTALL_SLS_PLUGINS}" \
    -F "variables[PROJECT_ID]=${CI_PROJECT_ID}" \
    -F "variables[PROJECT_JOB_NAME]=${PROJECT_JOB_NAME}" \
    -F "variables[PROJECT_JOB_ID]=${PROJECT_JOB_ID}" \
    "${CI_SERVER_URL}/api/v4/projects/${DOWNSTREAM_PROJECT_ID}/trigger/pipeline" )
  echo ${DOWNSTREAM_RESULTS} | jq .
  DOWNSTREAM_PIPELINE_ID=$( echo ${DOWNSTREAM_RESULTS} | jq -r .id )

  echo "Monitoring Downstream pipeline ${DOWNSTREAM_PIPELINE_ID} status..."
  DOWNSTREAM_STATUS='running'
  COUNT=0
  PIPELINE_API_URL="${CI_SERVER_URL}/api/v4/projects/${DOWNSTREAM_PROJECT_ID}/pipelines/${DOWNSTREAM_PIPELINE_ID}"
  echo "Pipeline api endpoint => ${PIPELINE_API_URL}"

  while [ ${DOWNSTREAM_STATUS} == "running" ]
  do
    if [ $COUNT -eq 0 ]
    then
      echo "Starting loop"
    fi
    if [ ${COUNT} -ge 350 ]
    then
      echo 'TIMEOUT!'
      DOWNSTREAM_STATUS="TIMEOUT"
      break
    elif [ $(( ${COUNT} % 60 )) -eq 0 ]
    then
      echo "Downstream pipeline status => ${DOWNSTREAM_STATUS}"
      echo "Count => ${COUNT}"
      sleep 10
    else
      sleep 10
    fi
    DOWNSTREAM_CALL=$( curl --silent --header "PRIVATE-TOKEN: ${GITLAB_TOKEN}" ${PIPELINE_API_URL} )
    if [ $COUNT -eq 0 ]
    then
      echo ${DOWNSTREAM_CALL} | jq .
    fi
    DOWNSTREAM_STATUS=$( echo ${DOWNSTREAM_CALL} | jq -r .status )
    COUNT=$(( ${COUNT} + 1 ))
  done

  # pipeline status is running, failed, success, manual
  echo "PIPELINE STATUS => ${DOWNSTREAM_STATUS}"
  if [ ${DOWNSTREAM_STATUS} != "success" ]
  then
    exit 2
  fi

Passing a parameter's value to shell function prints only the name of the parameter

I need to pass a parameter to my shell function, which looks like this:
function deploy {
    docker create \
        --name=$1_temp \
        -e test_postgres_database=$2 \
        -e test_publicAddress="http://${3}:9696" \
        # other irrelevant stuff
}
I am passing the following parameters:
deploy test_container test_name #1 test_database #2 ip_address #3
So when I pass those 3 parameters, a new container is created based on them. However, the third parameter is special: there is another function that gets the IP address of the container.
function get_container_ip_address {
    container_id=($(docker ps --format "{{.ID}} {{.Names}}" | grep $1))
    echo $(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ${container_id[0]})
}
So, the execution of the deploy function actually looks like this:
ip_address=$(get_container_ip_address test_container)
deploy test_container test_database ip_address
Let's say the IP address of the container is 1.1.1.1, so ip_address=1.1.1.1.
However, when I execute the script and create the container, its IP address is:
"http://ip_address:9696" and not "http://1.1.1.1:9696".
I also tried the following:
...
-e test_publicAddress="http://$3:9696"\
...
But I still got the same result. Is there a way I can get the value of the passed parameter? By the way, I am sure it contains the needed IP address, as I use it elsewhere (not in a function) and printed it for testing. Thank you in advance!
So run it like this:
ip_address=$(get_container_ip_address test_container)
deploy test_container test_database $ip_address
When you call it without the $, the shell does not expand the variable; it just passes the literal string ip_address.
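To illustrate the difference, a minimal standalone sketch (show_url is a hypothetical helper, not part of the questioner's script):

function show_url {
    # prints what the container would receive as test_publicAddress
    echo "http://${1}:9696"
}

ip_address=1.1.1.1
show_url ip_address     # prints http://ip_address:9696 (the literal string was passed)
show_url $ip_address    # prints http://1.1.1.1:9696 (the variable's value was passed)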

How can we remove a request parameter once (at startup) from a JMeter Test Plan?

I am using JMeter from the command line to run an automated suite of tests against some targets.
It looks like this:
for file in ./*.jmx; do
    ./jmeter \
        -n -t ${file}/Test_perfs_qgis_SHORT.jmx \
        -l ${TEST_DIR_PATH}/at_bench.log \
        -e \
        -o ${TEST_DIR_PATH}/report \
        -J TEST_DIR_PATH="${TEST_DIR_PATH}" \
        -J COMMON_PARAM="someValue" \
        -J ANOTHER_COMMON_PARAM="anotherValue" \
        -J SPECIFIC_PARAM="someValue Or emptyIfNotExpected"
done
Most of the targets share the same GET template, or at least allow unexpected parameters (which are then just ignored).
But some targets fail when they receive an extra parameter.
Thus, I added a PreProcessor to remove the parameter when its value is not provided.
if ((vars.get("SPECIFIC_PARAM") == null) || (vars.get("SPECIFIC_PARAM") == "")) {
    sampler.getArguments().removeArgument("MAP");
}
And this works well. But since I have around 50,000 calls, this will be triggered... a few times!
Considering this is for testing purposes, I fear this may have an impact on the results (though the impact is probably about the same for all requests).
Anyway, I am trying to find a way to remove the parameter at startup: once, for all requests.
Does anyone have a tip on how to do that?
Considering what you are removing (an argument of the sampler), it cannot be removed elsewhere/globally. Maybe you could instead have 2 templates, one with and one without that parameter, and select the sampler with an If Controller based on the value of the variable:
If Controller with condition: "${SPECIFIC_PARAM}" == ""
    Sampler without MAP argument
If Controller with condition: "${SPECIFIC_PARAM}" != ""
    Sampler with MAP argument

Protractor - run test again if fails

I'm using a shell script to run Protractor tests.
I want to make sure that if the test fails (exit code != 0) it will run again, three times at most.
I'm already using TeamCity, but TeamCity sends the 'FAIL' email and only then tries again. I want the test to run three times before sending a message.
This is part of my script:
if [ "$#" -eq 0 ];
then
/usr/local/bin/protractor proactor-config.js --suite=sanity
Now I want to somehow check whether the exit code was 0 and, if not, run it again.
Thanks.
I wrote a small module to do this called protractor-flake. It can be used via the CLI:
# defaults to 3 attempts
protractor-flake -- protractor.conf.js
Or programmatically.
One nice thing here is that it will only re-run failed spec files, instead of your whole test suite.
There is a long-standing feature request for this in the Protractor issue queue. It probably won't be baked into the core of the framework.
A function to check the status:
function test {
    "$@"
    local status=$?
    if [ $status -ne 0 ]; then
        echo "error with $1" >&2
    fi
    return $status
}
test command1
test command2
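To get the retry-three-times behaviour the question asks for, a check like the one above can be wrapped in a loop. A minimal sketch (run_with_retries and the attempt count are illustrative; the protractor command line is the one from the question):

function run_with_retries {
    local attempts=3
    local i
    for (( i = 1; i <= attempts; i++ )); do
        "$@" && return 0                                    # success: stop retrying
        echo "Attempt $i of $attempts failed (exit code $?)" >&2
    done
    return 1                                                # still failing after all attempts
}

run_with_retries /usr/local/bin/protractor proactor-config.js --suite=sanity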
If you use protractor with cucumber-js, you can choose to give each scenario (or all scenarios tagged as unstable) a number of retries:
./node_modules/cucumber/bin/cucumber-js --help
...
--retry <NUMBER_OF_RETRIES> specify the number of times to retry failing test cases (default: 0) (default: 0)
--retryTagFilter <EXPRESSION> only retries the features or scenarios with tags matching the expression (repeatable).
This option requires '--retry' to be specified. (default: "")
Unfortunately, if every failed scenario has been successfully retried, Protractor will still return with exit code 1:
https://github.com/protractor-cucumber-framework/protractor-cucumber-framework/issues/176
As a workaround, when starting the protractor I append the following to its command line:
// Assuming ensureDirSync comes from fs-extra and join from Node's path module.
const { ensureDirSync } = require('fs-extra');
const { join } = require('path');

const directory = 'build';
ensureDirSync(directory);
const cucumberSummary = join(directory, 'cucumberSummary.log');

protractorCommandLine += ` --cucumberOpts.format=summary:${cucumberSummary} \
|| grep -P "^(\\d*) scenarios? \\(.*?\\1 passed" ${cucumberSummary} \
&& rm ${cucumberSummary}`;
