My project builds under Windows and Linux. I have set up a GitLab runner on a Windows machine and one on a Linux machine. Now I want to configure the ".gitlab-ci.yml" to build on both machines. However, depending on the operating system, I'd like to call a different build script.
Example ".gitlab-ci.yml" (not working):
mybuild:
# on linux
script:
- ./build-linux.sh
# on windows
script:
- buildwin.bat
How can I achieve this in the .gitlab-ci.yml?
You can't, at least not within a single job. The way to achieve it is to:
give your runners unique tags, e.g. "linux-runner" and "windows-runner";
duplicate the job, and run one job only on runners with the "linux-runner" tag and the second job only on runners with the "windows-runner" tag.
linux build:
stage: build
tags:
- linux-runner
script:
- ./build-linux.sh
windows build:
stage: build
tags:
- windows-runner
script:
- buildwin.bat
See also https://stackoverflow.com/a/49199201/2779972
The generally suggested solution of creating two jobs doesn't fit my needs. I need to be able to use a Windows or a Linux/macOS runner, whichever one is available.
My suggested trick is to create a call script in /usr/local/bin so it can mimic the Windows call command:
#!/bin/bash
./"$@"
If you want to invoke the Gradle wrapper, for example, you can simply write in the .gitlab-ci.yml:
script:
- call gradle
It also works with a platform-specific script (for instance "build.bat" for Windows, and "build" for macOS/Linux):
script:
- call build
I hope that will help someone with the same need as me.
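To see how the wrapper behaves end to end, here is a minimal runnable demonstration of the idea, using scratch directories instead of /usr/local/bin (the directory layout and the "build" script name are hypothetical stand-ins):

```shell
# Create the "call" wrapper in a scratch bin directory; in CI you would
# install it to /usr/local/bin instead.
bindir=$(mktemp -d)
printf '#!/bin/bash\n./"$@"\n' > "$bindir/call"
chmod +x "$bindir/call"

# A dummy platform-specific build script, standing in for "build".
workdir=$(mktemp -d)
printf '#!/bin/bash\necho built\n' > "$workdir/build"
chmod +x "$workdir/build"

# "call build" resolves ./build relative to the current directory,
# just like "call build" on Windows would pick up build.bat.
(cd "$workdir" && PATH="$bindir:$PATH" call build)
```

On the Windows runner the same `call build` line in the job's script: block hits the built-in CMD `call`, so one YAML file serves both platforms.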
This solution works similarly to what @christophe-moine suggests, but without the need to create a call script or alias.
Provided that your Windows CI runner runs Windows PowerShell (which is likely), you may simply create two scripts, e.g.
buildmyapp (for Linux – note the missing extension!)
buildmyapp.cmd (for Windows)
... and then execute them in GitLab CI using the Unix-style syntax, without the script extension, e.g.
mybuild:
script:
- ./buildmyapp
parallel:
matrix:
- PLATFORM: [linux, windows]
tags:
- ${PLATFORM}
In the script: block, Windows PowerShell will pick buildmyapp.cmd on the Windows runner, and the Linux shell will pick the buildmyapp script on the Linux runner.
The parallel: matrix: keyword, in combination with tags:, creates two parallel jobs that select the matching CI runners via the tags keyword.
OK, this seems easy enough for Linux containers, but I am trying to get it done with Windows containers, and it's annoying that it's so difficult.
I have a Windows Dockerfile that builds a project. Part of the build process is to re-version the C# AssemblyInfo.cs files so that the built assemblies carry a build version from the CI environment (Azure DevOps).
I am using the PowerShell script https://github.com/microsoft/psi/blob/master/Build/ApplyVersionToAssemblies.ps1. It expects two environment variables: one I can hardcode, so it is not a problem, but the BUILD_BUILDNUMBER environment variable needs to be injected from the DevOps build system.
I have tried the following, none of which work:
ARG BUILD_BUILDNUMBER
ENV BUILD_BUILDNUMBER=$BUILD_BUILDNUMBER
RUN ApplyVersionToAssemblies.ps1
and building with:
docker build -f Dockerfile --build-arg BUILD_BUILDNUMBER=1.2.3.4 .
I also tried:
RUN SETX BUILD_BUILDNUMBER $BUILD_BUILDNUMBER
RUN SETX BUILD_BUILDNUMBER %BUILD_BUILDNUMBER%
and a few other combinations that I don't recall. What I ended up doing, which works but seems like a hack, is to pass the build number in a file via COPY and then modify the PowerShell script to read it into its local variable.
So for the moment it works, but I would really like to know how this is supposed to work via ARG and ENV for Windows container builds.
Windows containers definitely feel like Linux containers' poor cousin :)
Example for CMD in Docker Windows Containers:
ARG FEED_ACCESSTOKEN
ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS="{\"endpointCredentials\": [{\"endpoint\":\"https://URL.com/_packaging/Name/nuget/v3/index.json\", \"username\":\"PATForPackages\", \"password\":\"${FEED_ACCESSTOKEN}\"}]}"
SHELL ["cmd", "/S", "/C"]
RUN echo %VSS_NUGET_EXTERNAL_FEED_ENDPOINTS%
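For completeness, a hedged sketch of the same pattern under PowerShell (the default shell in many Windows base images), where environment variables must be read as $env:NAME; a plain $BUILD_BUILDNUMBER or %BUILD_BUILDNUMBER% will not expand there, which may be why the attempts in the question appeared to do nothing. The base image tag and script path below are hypothetical:

```dockerfile
# Hypothetical base image and script location
FROM mcr.microsoft.com/windows/servercore:ltsc2019
ARG BUILD_BUILDNUMBER
# Docker itself expands $BUILD_BUILDNUMBER in ENV, independent of the shell
ENV BUILD_BUILDNUMBER=$BUILD_BUILDNUMBER
SHELL ["powershell", "-Command"]
# Under PowerShell, read the variable as $env:BUILD_BUILDNUMBER
RUN echo $env:BUILD_BUILDNUMBER
RUN C:\Build\ApplyVersionToAssemblies.ps1
```

Built with `docker build --build-arg BUILD_BUILDNUMBER=1.2.3.4 .`, the ARG flows into the ENV at build time and the script sees it at RUN time.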
Context
I want to run a bash script during the building stage of my CI.
So far, the macOS build works fine and the Unix one is in progress, but I cannot execute the scripts in my Windows build stage.
Runner
We run a local GitLab runner on Windows 10 Home, where WSL is configured and Bash for Windows is installed and working:
(screenshot: Bash executing in Windows PowerShell)
Gitlab CI
Here is a small example that highlights the issue.
gitlab-ci.yml
stages:
- test
- build
build-test-win:
stage: build
tags:
- runner-qt-windows
script:
- ./test.sh
test.sh
#!/bin/bash
echo "test OK"
Job
Running with gitlab-runner 13.4.1 (e95f89a0)
on runner qt on windows 8KwtBu6r
Resolving secrets 00:00
Preparing the "shell" executor 00:00
Using Shell executor...
Preparing environment 00:01
Running on DESKTOP-5LUC498...
Getting source from Git repository
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in C:/Gitlab-Ci/builds/8KwtBu6r/0/<company>/projects/player-desktop/.git/
Checking out f8de4545 as 70-pld-demo-player-ecran-player...
Removing .qmake.stash
Removing Makefile
Removing app/
Removing business/
Removing <company>player/
git-lfs/2.11.0 (GitHub; windows amd64; go 1.14.2; git 48b28d97)
Skipping Git submodules setup
Executing "step_script" stage of the job script 00:02
$ ./test.sh
Cleaning up file based variables 00:01
Job succeeded
Issue
As you can see, the echo message "test OK" is not visible in the job output.
Nothing seems to be executed, but no error is shown, and running the script directly on the Windows device works fine.
In case you are wondering, this is a Qt application built via qmake and make, and deployed using windeployqt in a bash script (which is where the issue occurs).
Any tips or help would be appreciated.
Edit: the deploy script contains ~30 lines, which would make the .gitlab-ci.yml file hard to read if the commands were put directly in the YAML instead of an external shell script executed during the CI.
(screenshot: executing the script from the Windows environment)
It may be that GitLab opened a new window to execute bash, so stdout was not captured.
You can try file-system-based methods to check the execution results, such as echoing to files. Artifacts can be specified with a wildcard, for example **/*.zip.
I also tested on my Windows machine. First, if I run ./test.sh in PowerShell, it prompts a dialog to let me select which program to execute; the default is Git Bash. That means your machine may have a different executable configured (you'd better find out which).
I also tried in powershell:
bash -c "/mnt/c/test.sh"
and it gives me test OK as expected, with no new window.
So I suggest you try bash -c "some/path/test.sh" in your GitLab CI.
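Applied to the job from the question, that would look like the fragment below (assuming bash is reachable from the runner's PowerShell, and that test.sh lives in the repository root):

```yaml
build-test-win:
  stage: build
  tags:
    - runner-qt-windows
  script:
    # Invoke the script through bash explicitly instead of ./test.sh,
    # so PowerShell does not hand it off to a GUI file association.
    - bash -c "./test.sh"
```

With the explicit interpreter, the script's stdout is attached to the job's console and "test OK" should appear in the job log.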
I have a job that compiles 32-bit and 64-bit versions of the code and then uploads the package to a central server. To compile the code, I have to run a shell script that calls another script. Since the two scripts together are nearly 1k lines, I don't want to merge them into one.
I have seen the answer at run shell on remote-machine, and it only solves part of the problem. Jason R. Coombs's answer was great: when I run a shell script on the local machine, it actually runs on the remote one, and, best of all, the output shows up on the local machine, which is what I want. For example, when the 32-bit build fails, I can see what went wrong on the local machine without having to ssh to the remote machine and compile again.
There are two questions:
1. How do I run the two scripts from the local machine? I just don't want to merge nearly 1k lines of shell scripts together.
2. When I run the scripts, how do I change the working directory? For example, I want the build to run in /root/compile32; the shell scripts will git clone the code and compile && install it using make and other actions.
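A minimal local sketch of both points (the script names here are hypothetical stand-ins for the two existing ~1k-line scripts): keep the scripts separate and run them in sequence from a chosen working directory. The same cd-then-run chaining works over SSH as well, e.g. piping each script to `ssh host 'cd /root/compile32 && bash -s'` so the output streams back locally.

```shell
# Stand-ins for the two large scripts; in practice they already exist.
workdir=$(mktemp -d)                      # stand-in for /root/compile32
cat > "$workdir/fetch-and-build.sh" <<'EOF'
echo "building in $(pwd)"
EOF
cat > "$workdir/package.sh" <<'EOF'
echo "packaging in $(pwd)"
EOF

# Run both scripts, unmerged, from the chosen working directory; the
# subshell keeps the caller's own directory unchanged.
(cd "$workdir" && sh fetch-and-build.sh && sh package.sh)
```

The `&&` chaining also gives you the failure behaviour you want: if the first script fails, the second never runs and the non-zero exit status propagates to the caller.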
I have Jenkins running on Windows, and I have a build that works fine under Cygwin bash from the Cygwin terminal, so I now want to automate it. However, using this script:
#!C:\cygwin\bin\bash.exe
whoami
make
The system reports me as nt authority\system, not the ken that I get in an interactive shell. Is there an easy way to persuade Jenkins or Cygwin to run as me?
Most likely you are running Jenkins with the default installation, so you have two options. The first is mentioned in the comment: change the "Service account" to be the same as yours.
The second option is derived from best practice: run the Jenkins master on a system with backups etc., configure a slave node with your account credentials, and change the project configuration to build on that specific node.
(It is possible to run the slave and master on the same machine with different credentials, in case you want to try things out.)
The real problem I was having was not that the shell script was running as the wrong user, but that it was not executing the default /etc/profile. So the solution was simply:
#!C:\cygwin\bin\bash.exe -l
whoami
make
I was still nt authority\system, but now I had the correct environment set up and could run make successfully.
Note also that if I create a /home/system directory, I can add .bash_profile etc. to it to further customise the build environment.
I have 3 PROD servers with the same deployment build configuration; I choose which server to deploy to via a build parameter.
The issue is that, reviewing the build history, you can't tell which environment was deployed to.
I wonder if one of these solutions is possible:
- Show parameters in the history of a build
- Auto-tag a build with its parameters
I hope I have explained it well enough.
Thanks in advance.
You can accomplish this with a Command Line build step that echoes the relevant parameters to the build log. For a Windows-based agent, you could do something like:
Run: Executable with parameters
Command Executable: echo
Command Parameters: Deployed to server %your.server.host%
This would simply add a line to the build log that reads, Deployed to server FOO
Autotagging would be pretty cool, but I don't know of a way to do that.
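On the autotagging side, one possible approach, if your TeamCity version supports it (recent versions document an addBuildTag service message; check your instance's documentation before relying on it): echo a service message from a build step to tag the running build. The parameter name below is the same hypothetical one used in the answer above.

```shell
# Emitting this line from any build step asks TeamCity to tag the build;
# TeamCity substitutes %your.server.host% in the step text before it runs.
echo "##teamcity[addBuildTag 'deployed-to-%your.server.host%']"
```

Run outside TeamCity the line is just printed verbatim; inside a build step, the agent parses the ##teamcity[...] marker and applies the tag.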