How to get master and slave versions in gitlab-ci.yml

I have a pipeline where:
- echo 'Komponente;Version' >> $VERSION_CSV
- echo 'Master; \"$MASTER_VERSION\"' >> $VERSION_CSV
- echo 'Slave; \"$SLAVE_VERSION\"' >> $VERSION_CSV
- 'eval "$DEPLOY_CURL_COMMAND_4"'
but the output shows the literal text rather than the version.
Does anyone know how to display the version, or can someone tell me what I'm doing wrong?

This has nothing to do with GitLab CI; what you are doing is shell scripting, and that is where your problem lies. Since you didn't say which shell you are using, we can only guess. For Bash, this should give you the right result:
echo "Master; $MASTER_VERSION" >> $VERSION_CSV
Pro tip: don't post your real paths on Stack Overflow.
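For reference, a minimal sketch of the corrected script block (assuming a Bash runner shell and that $VERSION_CSV and the version variables are defined elsewhere in the pipeline):

script:
  - echo "Komponente;Version" >> "$VERSION_CSV"
  - echo "Master; $MASTER_VERSION" >> "$VERSION_CSV"
  - echo "Slave; $SLAVE_VERSION" >> "$VERSION_CSV"

Double quotes let the shell expand the variables; the single-quoted versions in the question print the text verbatim.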

Related

How to use an anchor to prevent repetition of code sections?

Say I have a number of jobs that all run a similar series of script commands, but need a few variables that change between them:
test a:
  stage: test
  tags:
    - a
  interruptible: true
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
  script:
    - echo "env is $(env)"
    - echo etcetera
    - echo and so on
    - docker build -t a -f Dockerfile.a .

test b:
  stage: test
  tags:
    - b
  interruptible: true
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
  script:
    - echo "env is $(env)"
    - echo etcetera
    - echo and so on
    - docker build -t b -f Dockerfile.b .
All I need is to be able to define e.g.
- docker build -t ${WHICH} -f Dockerfile.${which} .
If only I could make an anchor like:
.x: &which_ref
  - echo "env is $(env)"
  - echo etcetera
  - echo and so on
  - docker build -t $WHICH -f Dockerfile.$WHICH .
And include it there:
test a:
  script:
    - export WHICH=a
    <<: *which_ref
This doesn't work, and in a YAML validator I get errors like:
Error: YAMLException: cannot merge mappings; the provided source object is unacceptable
I also tried making an anchor that contains some entries under script inside of it:
.x: &which_ref
  script:
    - echo "env is $(env)"
    - echo etcetera
    - echo and so on
    - docker build -t $WHICH -f Dockerfile.$WHICH .
This means I have to include it from one level higher up. It does not error, but all it accomplishes is that the later-declared script section overrides the first one.
So I'm losing hope. It seems like I will just need to abstract the sections away into their own shell scripts and call them with arguments or whatever.
The YAML merge key << is a non-standard extension for YAML 1.1, which was superseded by YAML 1.2 about 14 years ago. Its usage is discouraged.
The merge key works on mappings, not on sequences. It cannot deep-merge. Thus what you want to do is not possible to implement with it.
Generally, YAML isn't designed to process data; it just loads it. The merge key is an outlier and didn't find its way into the standard for good reasons. You need a pre- or postprocessor to do complex processing, and GitLab CI doesn't offer anything besides simple variable expansion, so you're out of luck.
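To make the mapping-versus-sequence distinction concrete, here is a small sketch in plain YAML (the job and anchor names are illustrative):

# Merging works between mappings:
.defaults: &defaults
  stage: test
  interruptible: true

test a:
  <<: *defaults        # fine: the keys of .defaults are merged into this mapping
  script:
    - echo hello

# A script block, however, is a sequence. A sequence can only be aliased
# as a whole (script: *some_list); << cannot splice extra items into it,
# which is exactly what the question needs.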

How to get branch name in a Github Action Shell script

I'm trying to create an output to use later in the job.
However, for some reason the BRANCH env variable, which I set from GITHUB_REF_NAME, is an empty string, even though according to the docs it should be the branch.
Using the variable directly produces the same result.
- name: Set Terraform Environment Variable
  id: set_tf_env
  env:
    BRANCH: ${{env.GITHUB_REF_NAME}}
  run: |
    if [ "$BRANCH" == "dev" ]; then
      run: echo "::set-output name=TF_ENV::dev"
    elif [ "$BRANCH" == "prod" ]; then
      run: echo "::set-output name=TF_ENV::prod"
    else
      echo "Branch has no environment"
      exit 1
    fi
So after a bit more research, and thanks to the comments, I discovered the reason why it wasn't working.
It was because I was triggering the GitHub Action from a pull request, something I failed to mention.
So what I ended up using was:
github.event.pull_request.head.ref
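For a pull-request-triggered workflow, the step might then look roughly like this (a sketch based on the original step; it keeps the question's ::set-output syntax, which newer runners replace with writing to $GITHUB_OUTPUT, and drops the stray nested run: lines):

- name: Set Terraform Environment Variable
  id: set_tf_env
  env:
    BRANCH: ${{ github.event.pull_request.head.ref }}
  run: |
    if [ "$BRANCH" == "dev" ]; then
      echo "::set-output name=TF_ENV::dev"
    elif [ "$BRANCH" == "prod" ]; then
      echo "::set-output name=TF_ENV::prod"
    else
      echo "Branch has no environment"
      exit 1
    fi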

How do I assign exe output to a variable in gitlab ci scripts?

When running my GitLab CI pipeline I need to check whether a specified SVN directory exists.
I was using this script:
variables:
  DIR_CHECK: "default"

stages:
  - setup
  - test
  - otherDebugJob

.csharp:
  only:
    changes:
      - "**/*.cs"
      - "**/*.js"

setup:
  script:
    - $DIR_CHECK = $(svn ls https://server.fsl.local:port/svn/myco/personal/TestNotReal --depth empty)
    - echo $DIR_CHECK

test:
  script:
    - echo "DIR_CHECK is blank"
    - echo $DIR_CHECK
  rules:
    - if: $DIR_CHECK == ''

otherDebugJob:
  script:
    - echo "DIR_CHECK is not blank"
    - echo $DIR_CHECK
  rules:
    - if: $DIR_CHECK != ''
The svn command works and echoes back the correct reply, but $DIR_CHECK does not get set to anything other than the original default. It does not store the string returned by the svn command.
How do I store the string returned by an executable in a variable in GitLab CI scripts?
Test run:
Executing "step_script" stage of the job script 00:00
$ $DIR_CHECK = $(svn ls https://server.fsl.local:port/svn/myco/personal/TestNotReal --depth empty)
svn: E170000: Illegal repository URL 'https://server.fsl.local:port/svn/myco/personal/TestNotReal'
$ echo $DIR_CHECK
Cleaning up file based variables 00:01
Job succeeded
Passing variables between jobs
Unfortunately, you cannot use the DIR_CHECK variable the way you described. The list of jobs to be executed is generated before the jobs actually run, which means that for all of them DIR_CHECK will still be equal to default. First of all, here are a few ways to pass variables between jobs:
First way
You can add the desired command to the before_script section in your .csharp template:
.csharp:
  before_script:
    - export DIR_CHECK=$(svn ls https://server.fsl.local:port/svn/myco/personal/TestNotReal --depth empty)
and extend the other jobs from this .csharp template, as shown below.
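For illustration, a job picking up that before_script might look like this (a sketch using the template above):

test:
  extends: .csharp
  stage: test
  script:
    - echo "DIR_CHECK is $DIR_CHECK"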
Second way
You can pass variables between jobs with job artifacts:
setup:
  stage: setup
  script:
    - DIR_CHECK=$(svn ls https://server.fsl.local:port/svn/myco/personal/TestNotReal --depth empty)
    - echo "DIR_CHECK=$DIR_CHECK" > dotenv_file
  artifacts:
    reports:
      dotenv:
        - dotenv_file
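Jobs in later stages then receive DIR_CHECK automatically from the dotenv report, so a consumer might simply look like this (a sketch; add needs: or dependencies: if you restrict which artifacts are fetched):

test:
  stage: test
  script:
    - echo "DIR_CHECK is $DIR_CHECK"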
Third way
You can use trigger jobs or parent/child pipelines to pass variables into downstream pipelines.
staging:
  variables:
    DIR_CHECK: "you are awesome, guys!"
  stage: deploy
  trigger: my/deployment
In the triggered pipeline your variable will exist from the very start, and all the rules will be applied correctly.
Solution
In your case, if you really don't want to include the otherDebugJob step in your pipeline, you can do the following:
First approach
This is a fairly easy way and it will work, but it doesn't look like best practice. We already know how to pass our DIR_CHECK variable from the setup step, so just add a check to the test step's script block:
script:
  - |
    if [ -z "$DIR_CHECK" ]; then
      exit 0
    fi
  - echo "DIR_CHECK is blank"
  - echo $DIR_CHECK
Do almost the same thing for otherDebugJob, but check that DIR_CHECK is not empty with if [ -n "$DIR_CHECK" ].
This approach is helpful when your pipeline does not contain many steps, but a few more steps follow test and otherDebugJob.
Second approach
You can fail your setup step and then handle that failure in the otherDebugJob step:
setup:
  script:
    - DIR_CHECK=$(svn ls https://server.fsl.local:port/svn/myco/personal/TestNotReal --depth empty)
    - |
      if [ -z "$DIR_CHECK" ]; then
        exit 1
      fi
otherDebugJob:
  script:
    - echo "DIR_CHECK is not blank"
  when: on_failure
This approach is useful if you only want to do some debugging after the setup step fails. After all on_failure jobs have run, the pipeline will be marked as failed and stopped.

Output from running Matlab (Linux) as a Cron job in Bash includes many ">>" in the email

I am running a Matlab script on Linux (RedHat Enterprise Linux RHEL 7.6, 64-bit) as a cron job. I am not admin on that machine, therefore I use crontab -e to schedule the job. The installed version of Matlab is 2018b. The email which I receive upon execution includes a couple of >> at the beginning and end, which I find a bit irritating.
Here is an example of the email:
MATLAB is selecting SOFTWARE OPENGL rendering.
< M A T L A B (R) >
Copyright 1984-2018 The MathWorks, Inc.
R2018b (9.5.0.944444) 64-bit (glnxa64)
August 28, 2018
To get started, type doc.
For product information, visit www.mathworks.com.
>> >> >> >>
Matlab started: 2020-07-31 21:50:26.
>> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >>
Going to update from 2015-01-01 00:00:00 UTC to 2015-12-31 23:00:00 UTC.
[...]
>> Matlab closes: 2020-07-31 23:26:41.
>>
The corresponding lines at the beginning of the Matlab script look exactly like this:
close all
clearvars
% profile on % to check performance
fprintf('\nMatlab started: %s.\n', char(datetime()))
%% Database user parameters
% always connects to the specified database on "localhost"
DB_conn_name = 'abc';
DB_username = 'def';
DB_password = 'ghi';
% Add path and subfolders
if isunix
addpath(genpath('/project/abc'));
elseif ispc
addpath(genpath('C:\Branches\abc'));
end
% Change working folder
if isunix
cd /project/abc
elseif ispc
cd C:\Branches\abc
end
% Add database driver to path
javaaddpath JDBC_driver/mysql-connector-java.jar % Forward slashes within Matlab work even on Windows
% Set default datetime format
datetime.setDefaultFormats('default','yyyy-MM-dd HH:mm:ss')
%% Begin and end of update period
% now_UTC = datetime('now','TimeZone','UTC');
% time_2 = datetime(now_UTC.Year, now_UTC.Month, now_UTC.Day-1, 22, 0, 0); % Set the end time not too late, otherwise, some data might not yet be available for some areas leading to ugly "dips" in Power BI.
% During each update, we update e.g. the past 30 days
% datetime_month_delay = time_1 - days(30);
% Override automatic dates obtained below, for testing purposes
% time_1 = datetime(2020,1,1,0,0,0);
% time_2 = datetime(2020,2,1,23,0,0);
% Updating several years, one at a time
for iYear = 2015:2019
time_1 = datetime(iYear,1,1,0,0,0);
time_2 = datetime(iYear,12,31,23,0,0);
fprintf(['\nGoing to update from ',char(time_1),' UTC to ',char(time_2),' UTC. \n'])
[...]
It looks as though each line that is outside the for loop produces an empty line and therefore such a >> prompt in the output. This is also visible at the end (not included here).
The crontab -e looks like the following:
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=<my email address>
HOME=/project/abc
HTTP_PROXY=<proxy address>:8086
HTTPS_PROXY=<proxy address>:8086
# Run script regularly: minute hour day month dayofweek command
# No linebreaks allowed
15 2 * * * ~/script.sh
The shell script script.sh looks like this:
#!/bin/bash
/prog/matlab2018b/bin/matlab -nodesktop < ~/git-repos/abc/matlabscript.m
Does anyone have an idea what I need to change to get rid of these >>? That would be great! Thanks in advance!
The -nodesktop flag is still giving you an interactive shell, which is why crontab is capturing the prompts at all. You need to tell the matlab command what statement to execute.
I know you are using R2018b; but, I am going to give you BOTH answers for before and after R2019a, in case you ever upgrade.
For both answers: because you are calling this from your crontab, make sure to use the full path to your MATLAB executable for security reasons; and it would be good to use the -sd flag as well, so that the code your statement calls is first on the path. The statement to execute is typed the same way you would type it on the MATLAB command line.
Before R2019a: Per the doc page for the R2018b matlab (Linux) command, you need to run your command with the -r and -sd flags together. The -sd flag specifies your startup directory. Also, your code needs to have an exit statement at the end so that the matlab executable knows it's done.
/path/before_R2019a/matlab -sd /path/startup_directory -r statement
Starting in R2019a, the -batch flag in your invocation of MATLAB is the recommended way to run automated jobs like this, per the matlab (Linux) command doc page
Note that starting in R2019a, the -r flag is NOT recommended; and, it should NOT be used with the -batch flag.
The -batch flag is simpler to use, and was added to make automation tasks easier. For starters, you no longer need to have an exit statement in your code with this approach.
Also remember that if you need quotes, starting in R2016b, MATLAB handles both double and single quoted strings. Choose appropriately in your script or cron call to handle your linux shell replacements - or avoid them.
/path/R2019a+/matlab -sd /path/startup_directory -batch statement
As an added bonus, if you use the -batch flag, you can tell from inside your script whether it is running from a -batch call or interactively using the MATLAB variable batchStartupOptionUsed.
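Applied to the script.sh from the question, the invocation might then look roughly like this (a sketch; the R2019a install path is hypothetical, and both variants assume matlabscript.m lives in ~/git-repos/abc as in the question):

#!/bin/bash
# R2018b (current install): the >> prompts came from piping the file into an
# interactive session; -r runs the script by name instead. The script must end
# in (or be followed by) exit so MATLAB terminates.
/prog/matlab2018b/bin/matlab -nodesktop -sd ~/git-repos/abc -r "matlabscript; exit"

# After an upgrade to R2019a or newer, -batch is the recommended replacement
# and needs no explicit exit:
# /prog/matlab2019a/bin/matlab -sd ~/git-repos/abc -batch "matlabscript"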

if else statement incorrect output

I'm working on a custom Nagios script that will monitor cPanel to make sure it is running and return a status depending on the output of service cpanel status. This is what I have:
##############################################################################
# Constants
cpanelstate="running..."
ALERT_OK="OK - cPanel is running"
ALERT_CRITICAL="CRITICAL - cPanel is NOT running"
###############################################################################
cpanel=$(service cpanel status | head -1)
if [ "$cpanel" = "$cpanelstate" ]; then
  echo $ALERT_OK
  exit 0
else
  echo $ALERT_CRITICAL
  exit 2
fi
exit $exitstatus
When I run the script, this is the output I get:
root@shared01 [/home/mvelez]# /usr/local/nagios/libexec/check_cpanel
CRITICAL - cPanel is NOT running
When I run the script, cPanel IS running, but this is the output I get. As a matter of fact, no matter what status cPanel reports, this is the output that comes out. When I comment out the else, echo and exit 2 lines:
#else
# echo $ALERT_CRITICAL
# exit 2
It gives back a blank output:
root@shared01 [/home/mvelez]# /usr/local/nagios/libexec/check_cpanel
root@shared01 [/home/mvelez]#
I'm not sure what I'm doing incorrectly, as I am very new to bash scripting and trying to learn as I go along. Thank you very much in advance for any and all help!
The code below should work, but you might need to run it with sudo, because 'service' might not be available for ordinary users.
#!/bin/bash
##############################################################################
# Constants
cpanelstate="running"
ALERT_OK="OK - cPanel is running"
ALERT_CRITICAL="CRITICAL - cPanel is NOT running"
###############################################################################
# First line of the service status output, e.g. "cpsrvd (pid 10066) is running..."
cpanel=$(service cpanel status | head -1)
echo CPANEL $cpanel

# Pattern match: true if the status line contains "running" anywhere
if [[ $cpanel == *$cpanelstate* ]]; then
  echo $ALERT_OK
  exit 0
else
  echo $ALERT_CRITICAL
  exit 2
fi
Oleg Gryb's answer solves your problem, but as for why your original script didn't work:
[ "$cpanel" = "$cpanelstate" ] compared the full command output - e.g., cpsrvd (pid 10066) is running... - against just a substring of it, running..., for equality, which will obviously fail.
The solution is to use bash's pattern matching, provided via the right-hand side of its [[ ... ]] conditional (bash's superior alternative to the [ ... ] conditional):
[[ "$cpanel" == *"$cpanelstate" ]]
* represents any sequence of characters, so that this conditional returns true, if $cpanel ends with $cpanelstate (note how * must be unquoted to be recognized as a special pattern char.)
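A quick standalone illustration of the difference (the sample output string is taken from the explanation above):

#!/bin/bash
cpanel="cpsrvd (pid 10066) is running..."   # full line, as service ... status | head -1 would print
cpanelstate="running..."

# String equality: prints nothing, because the full line is not identical to the substring.
[ "$cpanel" = "$cpanelstate" ] && echo "equal"

# Pattern match: prints "matches", because the line ends with the expected substring.
[[ "$cpanel" == *"$cpanelstate" ]] && echo "matches"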
