Compilation error in fake script doesn't show specific failure in TeamCity build failure summary

Using Fake 5.0 on TeamCity. Prior to 5.0, if there was a compilation error in the build script, the error was visible in the build failure summary. After moving to 5.0, the details in the summary are only Fake's generic output.
In order to diagnose the problem you then have to dig through the logs to find the compilation error.
This may not be specific to TeamCity, because the same output is reported on the console.
I'm wondering if there is a configuration I am missing, either in the way Fake is run or in how the targets are configured, that would allow the actual error to propagate up.
Running build script from TeamCity using bash:
%env.BashPath% build.sh run build.fsx
Bash script as per the getting started examples:
#!/usr/bin/env bash
set -eu
set -o pipefail

# liberated from https://stackoverflow.com/a/18443300/433393
realpath() {
  OURPWD=$PWD
  cd "$(dirname "$1")"
  LINK=$(readlink "$(basename "$1")")
  while [ "$LINK" ]; do
    cd "$(dirname "$LINK")"
    LINK=$(readlink "$(basename "$1")")
  done
  REALPATH="$PWD/$(basename "$1")"
  cd "$OURPWD"
  echo "$REALPATH"
}

TOOL_PATH=$(realpath .fake)
FAKE="$TOOL_PATH"/fake

if ! [ -e "$FAKE" ]
then
  dotnet tool install fake-cli --tool-path "$TOOL_PATH" --version 5.*
fi
"$FAKE" "$@"
Running the MSBuild task:
Target.create "Build" (fun _ ->
    solutionFile
    |> MSBuild.build (fun p ->
        { p with
            ToolsVersion = Some "15.0"
            Verbosity = Some(Quiet)
            Targets = ["Build"]
            Properties =
                [ "Optimize", "True"
                  "DebugSymbols", "True"
                  "Configuration", "Release"
                  "RunCodeAnalysis", "True"
                  "CodeAnalysisGenerateSuccessFile", "False" ]
        }))
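One way to surface the compiler error itself, rather than Fake's generic exit message, is to wrap the build invocation and emit a TeamCity service message on failure. This is only a sketch, not Fake configuration: the log file name and the `error FSxxxx`/`MSBxxxx` line pattern are assumptions about what the build output contains.

```shell
#!/bin/sh
# Sketch: run the build, capture its output, and on failure report the first
# compiler-error line to TeamCity as a build problem so it shows up in the
# failure summary. Log name and error pattern are assumptions.

# Escape a value for a TeamCity service message (| ' [ ] must be prefixed with |).
tc_escape() {
  printf '%s' "$1" | sed "s/|/||/g; s/'/|'/g; s/\[/|[/g; s/\]/|]/g"
}

run_and_report() {
  local log="fake-build.log"
  "$@" > "$log" 2>&1
  local status=$?
  cat "$log"
  if [ "$status" -ne 0 ]; then
    local msg
    msg=$(grep -m1 -E "error (FS|MSB)[0-9]+" "$log" || echo "Build failed; see build log")
    echo "##teamcity[buildProblem description='$(tc_escape "$msg")']"
  fi
  return "$status"
}

# Usage (assumed invocation): run_and_report "$FAKE" run build.fsx
```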


I am stuck trying to get GitLab CI to run a shell file that builds a Unity player. It works when I do it, but not when GitLab tries it

The Problem
I used a tutorial to create a shell file that would build a Unity player for me (tutorial I used). But when I try to let GitLab's CI call this shell script, it seems to only clean up the directory and just ignore the Unity build command (console output).
My .yml file looks like this (the important part is the build-job: section; all the other stuff works):
stages: # List of stages for jobs, and their order of execution
  - build
  - test

build-job: # This job runs in the build stage, which runs first.
  stage: build
  script:
    - echo "Compiling the code..."
    - ./Build/build.sh
    - echo "Compile complete."

unit-test-job: # This job runs in the test stage.
  stage: test # It only starts when the job in the build stage completes successfully.
  script:
    - echo "Running unit tests... This will take about 60 seconds."
    - C:/"Program Files"/Unity/Hub/Editor/2020.3.20f1/Editor/Unity.exe \
      -batchmode \
      -projectPath C:/Users/joshu/Desktop/Gamedev/Testing/testing-asignment-2/"Testing_Asignment 2" \
      -runTests -testPlatform editmode \
      -logFile . \
      -testResults ./unit-tests.xml \
      -quit
    - echo "Code is tested"

lint-test-job: # This job also runs in the test stage.
  stage: test # It can run at the same time as unit-test-job (in parallel).
  script:
    - echo "Linting code... This will take about 10 seconds."
    - echo "No lint issues found."
My shell script contains this:
echo Cleaning Up Build Folder
rm -rf ./Build/Builds
echo Starting Build Process
C:/"Program Files"/Unity/Hub/Editor/2020.3.20f1/Editor/Unity.exe -quit -projectPath ../"Testing Asignment 2" -executeMethod Building.MyBuildScript.PreformBuild
echo Ended Build Process
My building script contains this:
using System;
using System.IO;
using UnityEditor;
using UnityEditor.Build.Reporting;

namespace Building
{
    public class MyBuildScript
    {
        public static void PreformBuild()
        {
            BuildPlayerOptions buildPlayerOptions = new BuildPlayerOptions();
            buildPlayerOptions.scenes = new[] { "Assets/Scenes/SampleScene.unity" };
            buildPlayerOptions.locationPathName = "../Build/Builds/WindowsBuild/Windows64Build.x86_64";
            buildPlayerOptions.target = BuildTarget.StandaloneWindows64;
            buildPlayerOptions.options = BuildOptions.None;

            BuildReport report = BuildPipeline.BuildPlayer(buildPlayerOptions);

            using StreamWriter writer = File.CreateText("../Build/Builds/WindowsBuild/results.txt");
            writer.Write(
                $"Build result: {report.summary.result} \n" +
                $"Process time: \n" +
                $"  start: {report.summary.buildStartedAt} \n" +
                $"  end: {report.summary.buildEndedAt} \n" +
                $"  total: {report.summary.totalTime} \n" +
                $"{report.summary.totalErrors} Errors found{(report.summary.totalErrors > 0 ? "!" : "")}");
        }
    }
}
I think there is something wrong with the access to my Unity.exe, but when I open the security settings on Windows, write/execute access is enabled.
Things I tried
I also tried just not using a shell script and calling the MyBuildScript from the .yml directly.
build-job: # This job runs in the build stage, which runs first.
  stage: build
  script:
    - echo "Compiling the code..."
    - C:/"Program Files"/Unity/Hub/Editor/2020.3.20f1/Editor/Unity.exe -quit -projectPath ./"Testing Asignment 2" -executeMethod Building.MyBuildScript.PreformBuild
    - echo "Compile complete."
But this also seemed to not execute Unity and just skip the command.
Furthermore, I tried using -buildWindows64Player instead of -executeMethod.
build-job: # This job runs in the build stage, which runs first.
  stage: build
  script:
    - echo "Compiling the code..."
    - C:/"Program Files"/Unity/Hub/Editor/2020.3.20f1/Editor/Unity.exe -quit -projectPath ./"Testing Asignment 2" -buildWindows64Player ./Build/Builds/WindowsBuild
    - echo "Compile complete."
But this also seemed to have skipped Unity. However, this attempt took the longest to finish: all the other options were done in a few seconds, but this one took about a minute or two. I don't know why, but if I had to guess, it actually did start up Unity and failed somewhere along the way.
Note
I know that I should add -batchmode to the shell commands, but first I want to see Unity open so that I know it is doing something.
Little update as of 12-29-2021 23:48. I noticed that the unit test also wasn't really working, but I fixed this by changing my Unity version to 2021.2.7f1, changing the Unity project folder structure, and updating the unit-test section to this:
unit-test-job: # This job runs in the test stage.
  stage: test # It only starts when the job in the build stage completes successfully.
  script:
    - echo "Running unit tests... This will take about 60 seconds."
    - C:/"Program Files"/Unity/Hub/Editor/2021.2.7f1/Editor/Unity.exe -batchmode -projectPath C:/Users/joshu/Desktop/Gamedev/Testing/testing-asignment-2/"Testing Asignment 2" -runTests -testPlatform editmode -logFile -testResults ./unit-tests.xml | Out-Default
Well, it turns out I did something stupid. I had my runner installed on my laptop and kept trying to push new changes from my PC. However, I used a projectPath that pointed to my local project on the laptop, not to the project that the runner was cloning from GitLab. So -projectPath now points to ./"Testing Asignment 2". This is also why I thought changing my Unity version helped: I had pulled the project to my laptop while installing a new version.
I also discovered that my test needed "| Out-Default" appended to it to actually (sort of) read the test data (StackOverflow page that helped with this).
I don't really use the shell script anymore, but I just wanted it building Unity, and that works!
The .yml file I am using right now looks like this:
stages: # List of stages for jobs, and their order of execution
  - build
  - test

build-job: # This job runs in the build stage, which runs first.
  stage: build
  script:
    - echo "Compiling the code..."
    - C:/"Program Files"/Unity/Hub/Editor/2021.2.7f1/Editor/Unity.exe -batchmode -quit -projectPath ./"Testing Asignment 2" -executeMethod Building.MyBuildScript.PreformBuild | Out-Default
    - echo "Compile complete."

unit-test-job: # This job runs in the test stage.
  stage: test # It only starts when the job in the build stage completes successfully.
  script:
    - echo "Running unit tests... This will take about 60 seconds."
    - C:/"Program Files"/Unity/Hub/Editor/2021.2.7f1/Editor/Unity.exe -batchmode -quit -projectPath ./"Testing Asignment 2" -runTests -testPlatform editmode -logFile -testResults ./unit-tests.xml | Out-Default
(I also changed my gitignore to not ignore Library\ScriptAssemblies from Unity. But I don't know if this is needed.)
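For debugging jobs like this, it can help to wrap the Unity call in a small function that streams the editor log to stdout (`-logFile -`) and propagates the exit code, so the runner's console shows why a batch-mode build failed. A sketch only; the editor path and method name are taken from the question:

```shell
#!/bin/sh
# Sketch: run a Unity batch-mode build from the runner's working directory,
# streaming the editor log to stdout and failing the job on a nonzero exit.
run_unity_build() {
  local unity="$1"    # path to the Unity editor binary
  local project="$2"  # project path relative to the runner's checkout
  "$unity" -batchmode -quit \
    -projectPath "$project" \
    -executeMethod Building.MyBuildScript.PreformBuild \
    -logFile -
}

# Usage (assumed install path):
# run_unity_build "C:/Program Files/Unity/Hub/Editor/2021.2.7f1/Editor/Unity.exe" "./Testing Asignment 2"
```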

Variable expansion of trigger branch property prevents downstream pipeline from being created

A bridge job in which the branch property of the trigger property uses a variable will always fail with reason: downstream pipeline can not be created.
Steps to reproduce
1. Set up a downstream pipeline with a trigger property as you would normally.
2. Add a branch property to the trigger property. Write the name of an existing branch on the downstream repository, like master/main or the name of a feature branch.
3. Run the pipeline and observe that the downstream pipeline is successfully created.
4. Now change the branch property to use a variable instead, like branch: $CI_TARGET_BRANCH.
5. Manually run the CI pipeline, setting the variable through the GitLab GUI.
6. The job will instantly fail with reason: downstream pipeline can not be created.
Code example
The goal is to create a GitLab CI config that runs the pipeline of a specified downstream branch. The bug occurs when attempting to do it with a variable.
This works, creating a downstream pipeline like normal. But the branch name is hardcoded:
stages:
  - deploy

deploy:
  variables:
    environment: dev
  stage: deploy
  trigger:
    project: group/project
    branch: foo
    strategy: depend
This does not work; although TARGET_BRANCH is set successfully, the job fails because the downstream pipeline can not be created:
stages:
  - removeme
  - deploy

before_script:
  - if [ -z "$TARGET_BRANCH" ]; then TARGET_BRANCH="main"; fi
  - echo $TARGET_BRANCH

test_variable:
  stage: removeme
  script:
    - echo $TARGET_BRANCH

deploy:
  variables:
    environment: dev
  stage: deploy
  trigger:
    project: group/project
    branch: $TARGET_BRANCH
    strategy: depend
If you know what I'm doing wrong, or you have something that does work with variable expansion of the branch property, please share it (along with your GitLab version). Alternate solutions are also welcome, but this one seems like it should work.
GitLab Version on which bug occurs
Self-hosted GitLab Community Edition 12.10.7
What is the current bug behavior?
The job always fails for reason: downstream pipeline can not be created.
What is the expected correct behavior?
The branch property should be set to the value of the variable and the downstream pipeline should be created as normal, just as if you simply hardcoded/typed the name of the branch.
More details
The ability to use variable expansion in the trigger branch property was added in v12.4, and it's explicitly mentioned in the docs.
I searched through other .gitlab-ci.yml / GitLab config files. Every single one that attempted to use variable expansion in the branch property had it commented out, with a note saying it was bugged for an unknown reason (example).
I haven't been able to find a repository in which someone claimed to have a working variable expansion for the branch property of the trigger property.
Unfortunately, the alternate solutions are either (a) hardcoding every downstream branch name into the GitLab CI config of the upstream project, or (b) not being able to test changes to the downstream GitLab CI config without first committing them to master/main, or having to use only/except.
TL;DR: How to use the value of a variable for the branch property of a bridge job? My current solution makes it so the job fails and the downstream pipeline isn't created.
This is 'works as designed', and GitLab will improve it in upcoming releases.
The trigger job is pretty weak because it is not a full job that runs on a runner, so most of the trigger configuration needs to be hardcoded.
I use direct API calls to trigger downstream pipelines, passing the CI_JOB_TOKEN, which links the upstream job to the downstream one just as the trigger does.
API calls give you full control:
curl -X POST \
  -s \
  -F token=${CI_JOB_TOKEN} \
  -F "ref=${REF_NAME}" \
  -F "variables[STAGE]=${STAGE}" \
  "${CI_SERVER_URL}/api/v4/projects/${CI_PROJECT_ID}/trigger/pipeline"
Note that this will not wait for and monitor the downstream job, so you will need to code for that yourself if you need to wait for it to finish. Moreover, CI_JOB_TOKEN cannot be used to get the status of the downstream job, so you will need another token for that.
- |
  DOWNSTREAM_RESULTS=$( curl --silent -X POST \
    -F token=${CI_JOB_TOKEN} \
    -F "ref=${DOWNSTREAM_PROJECT_REF}" \
    -F "variables[STAGE]=${STAGE}" \
    -F "variables[SLS_PACKAGE_PATH]=.serverless-${STAGE}" \
    -F "variables[INVOKE_SLS_TESTS]=false" \
    -F "variables[UPSTREAM_PROJECT_REF]=${CI_COMMIT_REF_NAME}" \
    -F "variables[INSTALL_SLS_PLUGINS]=${INSTALL_SLS_PLUGINS}" \
    -F "variables[PROJECT_ID]=${CI_PROJECT_ID}" \
    -F "variables[PROJECT_JOB_NAME]=${PROJECT_JOB_NAME}" \
    -F "variables[PROJECT_JOB_ID]=${PROJECT_JOB_ID}" \
    "${CI_SERVER_URL}/api/v4/projects/${DOWNSTREAM_PROJECT_ID}/trigger/pipeline" )
  echo ${DOWNSTREAM_RESULTS} | jq .

  DOWNSTREAM_PIPELINE_ID=$( echo ${DOWNSTREAM_RESULTS} | jq -r .id )
  echo "Monitoring Downstream pipeline ${DOWNSTREAM_PIPELINE_ID} status..."
  DOWNSTREAM_STATUS='running'
  COUNT=0
  PIPELINE_API_URL="${CI_SERVER_URL}/api/v4/projects/${DOWNSTREAM_PROJECT_ID}/pipelines/${DOWNSTREAM_PIPELINE_ID}"
  echo "Pipeline api endpoint => ${PIPELINE_API_URL}"

  while [ ${DOWNSTREAM_STATUS} == "running" ]
  do
    if [ $COUNT -eq 0 ]
    then
      echo "Starting loop"
    fi

    if [ ${COUNT} -ge 350 ]
    then
      echo 'TIMEOUT!'
      DOWNSTREAM_STATUS="TIMEOUT"
      break
    elif [ $(( ${COUNT} % 60 )) -eq 0 ]
    then
      echo "Downstream pipeline status => ${DOWNSTREAM_STATUS}"
      echo "Count => ${COUNT}"
      sleep 10
    else
      sleep 10
    fi

    DOWNSTREAM_CALL=$( curl --silent --header "PRIVATE-TOKEN: ${GITLAB_TOKEN}" ${PIPELINE_API_URL} )
    if [ $COUNT -eq 0 ]
    then
      echo ${DOWNSTREAM_CALL} | jq .
    fi

    DOWNSTREAM_STATUS=$( echo ${DOWNSTREAM_CALL} | jq -r .status )
    COUNT=$(( ${COUNT} + 1 ))
  done

  # pipeline status is running, failed, success, manual
  echo "PIPELINE STATUS => ${DOWNSTREAM_STATUS}"
  if [ ${DOWNSTREAM_STATUS} != "success" ]
  then
    exit 2
  fi
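The monitoring loop above can be condensed into a reusable helper. This sketch separates the "fetch status" command from the loop so the timeout logic is testable on its own; in real use the command would be the curl-plus-jq call from the answer, and a sleep would go inside the loop:

```shell
#!/bin/sh
# Sketch: poll a command that prints a pipeline status until the status leaves
# the "running"/"pending" states or a maximum number of attempts is reached.
# In real use, add a `sleep 10` inside the loop and pass something like:
#   poll_status 350 sh -c 'curl --silent --header "PRIVATE-TOKEN: $GITLAB_TOKEN" "$PIPELINE_API_URL" | jq -r .status'
poll_status() {
  local max_tries=$1; shift
  local status="running" count=0
  while [ "$status" = "running" ] || [ "$status" = "pending" ]; do
    if [ "$count" -ge "$max_tries" ]; then
      echo "TIMEOUT"
      return 1
    fi
    status=$("$@")
    count=$((count + 1))
  done
  echo "$status"
}
```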

How to compare a variable and set the value in alpine linux [duplicate]

This question already has answers here:
Dockerfile if else condition with external arguments
(14 answers)
Closed 3 years ago.
I have never written any shell script before, but after extensively googling I came up with the following code for my Dockerfile. But I don't understand why it doesn't work.
###stage 2####################
FROM nginx:alpine
##########Calculate the environment type #########
ARG BUILD_TYPE
# An echo of BUILD_TYPE gives output of Development when the passed argument is Development.
RUN if [ "$BUILD_TYPE" = "Development" ]; then BUILD_TYPE='dev'; fi
RUN if [ "$BUILD_TYPE" = "Production" ]; then BUILD_TYPE='prod'; fi
RUN echo "UI BUILD_TYPE=$BUILD_TYPE---------"
##########Calculate the environment type #########
The above echo always comes out as Development.
UPDATE
Now I built a sample in a separate Dockerfile to isolate the issue. After this I realised that the assignment is not happening even though the condition matched.
Here is the new sample docker file code.
FROM nginx:alpine
ARG BUILD_TYPE
ARG ENV_TYPE
RUN if [ "$BUILD_TYPE" = "Development" ]; then ENV_TYPE='dev'; echo "matched dev"; fi
RUN if [ "$BUILD_TYPE" = "Production" ]; then ENV_TYPE="prod"; echo "matched prod"; fi
RUN echo "UI BUILD_TYPE=$BUILD_TYPE ENV_TYPE = $ENV_TYPE---------"
The output is
matched dev
UI BUILD_TYPE=Development ENV_TYPE = ---------
I see ENV_TYPE is empty.
Each RUN command in a Dockerfile is executed in a separate shell session, so when you set BUILD_TYPE, you are setting a shell variable for that session only, which shadows the build argument. You are not overwriting the build argument for the rest of the docker build.
You can see this by the fact that if you change your if statements to:
RUN if [ "$BUILD_TYPE" = "Development" ]; then BUILD_TYPE='dev'; fi; echo $BUILD_TYPE
RUN if [ "$BUILD_TYPE" = "Production" ]; then BUILD_TYPE='prod'; fi; echo $BUILD_TYPE
The variable is correctly set and echoed at the end of each line, but your final echo will still print the original build argument.
If you instead put these statements in a shell script and run that instead, it works just fine:
build.sh:
#!/bin/sh
if [ "$BUILD_TYPE" = "Development" ]; then BUILD_TYPE='dev'; fi
if [ "$BUILD_TYPE" = "Production" ]; then BUILD_TYPE='prod'; fi
echo "UI BUILD_TYPE=$BUILD_TYPE---------"
Dockerfile:
###stage 2####################
FROM nginx:alpine
##########Calculate the environment type #########
ARG BUILD_TYPE
COPY build.sh .
RUN ./build.sh
Output:
docker build --build-arg BUILD_TYPE=Production .
Sending build context to Docker daemon 166.9kB
Step 1/4 : FROM nginx:alpine
---> 36189e6707f4
Step 2/4 : ARG BUILD_TYPE
---> Running in cab2e8749e7e
Removing intermediate container cab2e8749e7e
---> ea9ec7779909
Step 3/4 : COPY build.sh .
---> 336989bf6389
Step 4/4 : RUN ./build.sh
---> Running in ecd09ee58780
UI BUILD_TYPE=prod---------
Removing intermediate container ecd09ee58780
---> ed9ca30af483
Successfully built ed9ca30af483
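The two if statements can also be collapsed into a single case mapping inside build.sh; a sketch (the fall-through branch, passing unmatched values through unchanged, is an assumption):

```shell
#!/bin/sh
# Sketch: map a Docker build argument to a short environment name.
# Unrecognized values pass through unchanged (an assumption).
map_build_type() {
  case "$1" in
    Development) echo 'dev' ;;
    Production)  echo 'prod' ;;
    *)           echo "$1" ;;
  esac
}

# Usage inside build.sh:
# BUILD_TYPE=$(map_build_type "$BUILD_TYPE")
# echo "UI BUILD_TYPE=$BUILD_TYPE---------"
```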

Protractor - run test again if fails

I'm using shell script to run protractor tests.
I want to make sure that if the test fails (exit code != 0) then it will run again - three times most.
I'm already using Teamcity, but Teamcity sends the 'FAIL' email and only then tries again. I want the test will run three times before sending a message.
this is part of my script:
if [ "$#" -eq 0 ];
then
  /usr/local/bin/protractor proactor-config.js --suite=sanity
fi
Now I want to somehow check whether the exit code was 0 and, if not, run again.
Thanks.
I wrote a small module to do this called protractor-flake. It can be used via the CLI:
# defaults to 3 attempts
protractor-flake -- protractor.conf.js
Or programmatically.
One nice thing here is that it will only re-run failed spec files, instead of your whole test suite.
There is a long standing feature request for this in the protractor issue queue. It probably won't be baked into the core of the framework.
A function to check the status:
function test {
  "$@"
  local status=$?
  if [ $status -ne 0 ]; then
    echo "error with $1" >&2
  fi
  return $status
}

test command1
test command2
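Building on that status check, a bounded retry loop can run the suite up to three times before TeamCity ever sees a failure. A sketch; the protractor command line is taken from the question, and the attempt count matches the "three times most" requirement:

```shell
#!/bin/sh
# Sketch: run a command up to N times, returning 0 on the first success
# and nonzero only after all attempts have failed.
retry() {
  local max_attempts=$1; shift
  local attempt=1
  until "$@"; do
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "failed after $max_attempts attempts: $1" >&2
      return 1
    fi
    attempt=$((attempt + 1))
  done
}

# Usage (command from the question):
# retry 3 /usr/local/bin/protractor proactor-config.js --suite=sanity
```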
If you use protractor with cucumber-js, you can choose to give each scenario (or all scenarios tagged as unstable) a number of retries:
./node_modules/cucumber/bin/cucumber-js --help
...
--retry <NUMBER_OF_RETRIES>    specify the number of times to retry failing test cases (default: 0)
--retryTagFilter <EXPRESSION>  only retries the features or scenarios with tags matching the expression (repeatable). This option requires '--retry' to be specified. (default: "")
Unfortunately if every failed scenario has been successfully retried, Protractor will still return with exit code 1:
https://github.com/protractor-cucumber-framework/protractor-cucumber-framework/issues/176
As a workaround, when starting Protractor I append the following to its command line:
const directory = 'build';
ensureDirSync(directory);
const cucumberSummary = join(directory, 'cucumberSummary.log');
protractorCommandLine += ` --cucumberOpts.format=summary:${cucumberSummary} \
|| grep -P "^(\\d*) scenarios? \\(.*?\\1 passed" ${cucumberSummary} \
&& rm ${cucumberSummary}`;

Running a python script within a bash file within a Yii project

I have a Yii project that allows importing files.
Within this project I call the following command to try and convert xls files to csv:
$file = fopen($model->importfile->tempname,'r');
$filetype = substr($model->importfile, strrpos($model->importfile, '.')+1);
if ($filetype === 'xls')
{
    $tempxls = $model->importfile->tempname;
    $outputArr = array();
    exec(Yii::app()->basePath."/commands/xlstocsv.sh " . $tempxls, $outputArr);
    PropertiesController::xlsToConsoleV7Format($tempxls, $log);
}
xlstocsv.sh:
#!/bin/bash
# Try to autodetect OOFFICE and OOOPYTHON.
OOFFICE=`ls /usr/bin/libreoffice /usr/lib/libreoffice/program/soffice /usr/bin/X11/libreoffice | head -n 1`
OOOPYTHON=`ls /usr/bin/python3 | head -n 1`

XLS='.xls'
CSV='.csv'
INPUT=$1$XLS
OUTPUT=$1$CSV
cp $1 $INPUT

if [ ! -x "$OOFFICE" ]
then
  echo "Could not auto-detect OpenOffice.org binary"
  exit
fi

if [ ! -x "$OOOPYTHON" ]
then
  echo "Could not auto-detect OpenOffice.org Python"
  exit
fi

echo "Detected OpenOffice.org binary: $OOFFICE"
echo "Detected OpenOffice.org python: $OOOPYTHON"

# Start OpenOffice.org in listening mode on TCP port 2002.
$OOFFICE "-accept=socket,host=localhost,port=2002;urp;StarOffice.ServiceManager" -norestore -nofirststartwizard -nologo -headless &

# Wait a few seconds to be sure it has started.
sleep 5s

# Convert as many documents as you want serially (but not concurrently).
# Substitute whichever documents you wish.
$OOOPYTHON /fullpath/DocumentConverter.py $INPUT $OUTPUT

# Close OpenOffice.org.
cp $OUTPUT $1
DocumentConverter.py:
This can be found here: https://github.com/mirkonasato/pyodconverter. It has been slightly modified to have correct syntax for python3.
Ok, the issue is: when run from the terminal, the PHP code correctly creates the CSV file from the Excel file. However, when run from within the browser, the script still runs and creates the output file, but the file is not correctly converted to CSV.
It works perfectly for every file I have thrown at it so far when run from the console, but for some reason it fails to convert the file properly when run from a browser.
Any ideas for what could be going wrong?
Thanks alejandro, permission errors seemed to be the issue. I also needed to move the .config/libreoffice folder into Apache's home directory.
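As an aside, newer LibreOffice builds can do this conversion in one headless call, without the socket listener and the Python helper. A sketch, under the assumption that the installed soffice binary supports --convert-to (available since LibreOffice 3.5):

```shell
#!/bin/sh
# Sketch: convert an .xls file to .csv with LibreOffice's built-in filter
# instead of the port-2002 listener + DocumentConverter.py pipeline.
xls_to_csv() {
  local soffice="$1" input="$2" outdir="$3"
  "$soffice" --headless --convert-to csv --outdir "$outdir" "$input"
}

# Usage (assumed binary path):
# xls_to_csv /usr/bin/libreoffice /tmp/upload.xls /tmp
```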
