I got a job to compile 32-bit and 64-bit versions of the code and then get the package to the central server. To compile the code, I have to run a shell script which in turn calls another script; since the two scripts together are nearly 1k lines, I don't want to merge them into one script.
I have seen the answer to run shell on remote-machine, and it only solves part of the problem. Jason R. Coombs's answer was great: when I run a shell script on the local machine, it actually runs on the remote one, and, best of all, the output shows up on the local machine, which is exactly what I want. For example, when compiling the 32-bit version fails, I can see what went wrong on the local machine and don't need to ssh to the remote machine to compile again.
There are two questions:
1. How do I run the two scripts from the local machine? I just don't want to merge nearly 1k lines of shell scripts together.
2. When I run the scripts, how do I change the working directory? For example, I want the code to run in /root/compile32; the shell script will git clone the code, then compile and install using make and other actions.
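For what it's worth, both points can be handled with plain ssh (a sketch, assuming key-based SSH access to the remote build host; build32.sh and helper.sh are hypothetical names for your scripts). 'bash -s' reads the script from stdin, so the local file never has to be copied to the remote machine, and the cd sets the working directory before it runs:

# Stream the local script to the remote host; all output comes back to the local terminal.
ssh root@buildhost 'mkdir -p /root/compile32 && cd /root/compile32 && bash -s' < build32.sh

# If one script calls the other, copy both over first and run the entry script remotely:
scp build32.sh helper.sh root@buildhost:/root/compile32/
ssh root@buildhost 'cd /root/compile32 && ./build32.sh'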
Related
I have a couple of bash scripts to automate the setup process for new devices, e.g. installing packages, configuring environment variables, etc.
I'm working on making the process more automated with autoexpect, and adding a few other things; however, it's difficult to test, since every time I run the install script I have to manually go back and undo the changes it made. Is there a way to run the scripts without actually installing anything, so I can observe the behaviour for testing? Something like the --dry-run option of rsync.
For configuring your machine and being able to test this quickly, knowing you won't cause problems on your local PC: create a VM using VirtualBox or VMware Player and snapshot it, so you can revert to the state before you ran the script. Then run your script on this VM and check which configuration has been applied successfully.
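If you go the VirtualBox route, the snapshot cycle itself can be scripted, so reverting after each test run is a one-liner (a sketch; "setup-test" is a hypothetical VM name):

VBoxManage snapshot "setup-test" take "pre-install"      # save the clean state
# ... run the install script inside the VM ...
VBoxManage controlvm "setup-test" poweroff               # stop the VM before reverting
VBoxManage snapshot "setup-test" restore "pre-install"   # roll back to the clean state
VBoxManage startvm "setup-test"                          # boot the clean state again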
Context
I want to run a bash script during the building stage of my CI.
So far, the macOS build works fine and the Unix build is in progress, but I cannot execute the scripts in my Windows build stage.
Runner
We run a local GitLab runner on Windows 10 Home, where WSL is configured and Bash for Windows is installed and working:
Bash executing in Windows PowerShell
Gitlab CI
Here is a small example that highlights the issue.
gitlab-ci.yml
stages:
  - test
  - build

build-test-win:
  stage: build
  tags:
    - runner-qt-windows
  script:
    - ./test.sh
test.sh
#!/bin/bash
echo "test OK"
Job
Running with gitlab-runner 13.4.1 (e95f89a0)
on runner qt on windows 8KwtBu6r
Resolving secrets 00:00
Preparing the "shell" executor 00:00
Using Shell executor...
Preparing environment 00:01
Running on DESKTOP-5LUC498...
Getting source from Git repository
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in C:/Gitlab-Ci/builds/8KwtBu6r/0/<company>/projects/player-desktop/.git/
Checking out f8de4545 as 70-pld-demo-player-ecran-player...
Removing .qmake.stash
Removing Makefile
Removing app/
Removing business/
Removing <company>player/
git-lfs/2.11.0 (GitHub; windows amd64; go 1.14.2; git 48b28d97)
Skipping Git submodules setup
Executing "step_script" stage of the job script 00:02
$ ./test.sh
Cleaning up file based variables 00:01
Job succeeded
Issue
As you can see, the echo message "test OK" is not visible in the job output.
Nothing seems to be executed, but no error is shown, and running the script directly on the Windows device works fine.
In case you are wondering, this is a Qt application built via qmake and make, and deployed using windeployqt in a bash script (which is where the issue is).
Any tips or help would be appreciated.
edit: The deploy script contains ~30 lines, which would make the gitlab-ci YAML file hard to read if the commands were put directly into the YAML instead of kept in an external shell script executed during the CI.
Executing the script from the Windows env
It may be that GitLab opened a new window to execute bash, so stdout is not captured.
You can try file-system-based methods to check the execution results, such as echoing to files. The artifact can be specified with a wildcard, for example **/*.zip.
I also tested on my Windows machine. First, if I run ./test.sh in PowerShell, it prompts a dialog to let me select which program to execute; the default is Git Bash. That means your machine may have a particular executable configured (you'd better find out which).
I also tried in PowerShell:
bash -c "/mnt/c/test.sh"
and it gives me test OK as expected, without a new window.
So I suggest you try bash -c "some/path/test.sh" in your GitLab CI.
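Applied to the job from the question, that would look something like this (a sketch; depending on where the runner checks out the project, the WSL-style /mnt/c/... path may be needed instead of the relative one):

build-test-win:
  stage: build
  tags:
    - runner-qt-windows
  script:
    - bash -c "./test.sh"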
My project builds under Windows and Linux. I have set up a gitlab-runner on a Windows machine and one on a Linux machine. Now I want to configure the .gitlab-ci.yml to build on both machines. BUT, depending on the operating system, I'd like to call a different build script.
Example ".gitlab-ci.yaml" (not working)
mybuild:
  # on linux
  script:
    - ./build-linux.sh
  # on windows
  script:
    - buildwin.bat
How can i achieve this in the .gitlab-ci.yml?
You can't. The way to achieve it is to:
1. Give your runners unique tags, e.g. "linux-runner" and "windows-runner".
2. Duplicate the job, and run one job only on runners with the tag "linux-runner" and the second job only on runners with the "windows-runner" tag.
linux build:
  stage: build
  tags:
    - linux-runner
  script:
    - ./build-linux.sh

windows build:
  stage: build
  tags:
    - windows-runner
  script:
    - buildwin.bat
See also https://stackoverflow.com/a/49199201/2779972
The generally suggested solution of creating two jobs doesn't fit my needs. My need is to be able to use a Windows or a Linux/macOS runner, whichever one is available.
My suggested trick is to create a call script in /usr/local/bin so it can mimic the Windows call command:
#!/bin/bash
# run the named script or binary from the current directory, forwarding any arguments
./"$@"
If you want to invoke Gradle wrapper for example, you can simply write in the gitlab-ci.yml:
script:
  - call gradle
it also works with a specific script (for instance "build.bat" for Windows, and "build" for MacOS/Linux):
script:
  - call build
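For the trick to work, the call script has to exist as an executable on the Linux/macOS runner host; a minimal sketch of installing it:

sudo tee /usr/local/bin/call >/dev/null <<'EOF'
#!/bin/bash
./"$@"
EOF
sudo chmod +x /usr/local/bin/call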
I hope that will help someone with the same need as me.
This solution works similarly to what @christophe-moine suggests, but without the need for creating a call script or alias.
Provided that your Windows CI runner runs Windows PowerShell (which is likely), you may simply create two scripts, e.g.
buildmyapp (for Linux – note the missing extension!)
buildmyapp.cmd (for Windows)
... and then execute them in GitLab CI using the Unix-style syntax, without the script extension, e.g.
mybuild:
  script:
    - ./buildmyapp
  parallel:
    matrix:
      - PLATFORM: [linux, windows]
  tags:
    - ${PLATFORM}
In the script: block, Windows PowerShell will pick buildmyapp.cmd on the Windows runner, while the Linux shell will pick the buildmyapp script on the Linux runner.
The parallel: matrix: keyword in combination with tags: creates two parallel jobs that pick your CI runners via the tags keyword.
I am using the Jenkins Azure VM Agents Plugin with a Linux Master, to launch jobs on Windows agents.
I have been through all the configuration steps and everything works fine until I try to use Docker on the agents.
My pipeline script:
pipeline {
    agent {
        docker {
            image 'myurl.io/myimage:latest'
            registryUrl 'https://myurl.io/'
            registryCredentialsId '123456789abcdefg'
        }
    }
}
The pipeline appears to fail when it runs this command:
docker pull myurl.io/myimage:latest
The error reported comes down to this:
Caused: java.io.IOException: Cannot run program "nohup" (in directory "C:\Jenkins\workspace\Test Pipeline Docker"): CreateProcess error=2, The system cannot find the file specified
Some notes:
- I have ticked the box to install git on the image:
- The Git tools appear to be successfully installed on the agent VM
- This question seems to be related, but it is not exactly the same
- I am not running the sh command directly; it is being run by the plugin
- I do not think I have access to set the PATH at this stage
- This issue on JIRA https://issues.jenkins-ci.org/browse/JENKINS-36776 is related, but it does not seem to be fixed, and the suggested workarounds don't seem to apply to my situation
My question
Is there a way to get my pipeline script to work? Maybe there are some extra commands I can somehow execute on the agent after it launches - but before the docker pull command - to add the required directories to the PATH?
Or is there some other workaround?
I think you were on the right track with the question you already found:
Jenkins pipeline sh fail with "cannot run program nohup" on windows
But, according to the wiki page of the docker-pipeline plugin, running docker on windows workers is not supported (a bit hidden though...):
For Jenkins environments which have macOS, Windows, or other agents, which are unable to run the Docker daemon, this default setting may be problematic. [https://www.jenkins.io/doc/book/pipeline/docker/#specifying-a-docker-label]
As far as I can see, there have been several attempts to add that feature, but it doesn't seem to have been added (yet): https://github.com/jenkinsci/docker-workflow-plugin/pull/148
In the last link it is also stated that fixing the sh/nohup issue will not be your only problem; for example, the Docker plugin will try to run id to get the user.
Nevertheless, you could try to make Linux commands available by editing the PATH in your pipeline declaration:
https://stackoverflow.com/a/45101214/12338776
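Following that linked answer, the idea is to prepend a directory containing Unix-style tools (nohup, sh) to the PATH in the pipeline itself. A hedged sketch, where the agent label and the Git for Windows path are assumptions about your setup:

pipeline {
    agent { label 'windows' }   // hypothetical agent label
    environment {
        // Assumed location of Git for Windows' bundled Unix tools (provides nohup and sh)
        PATH = "C:\\Program Files\\Git\\usr\\bin;${env.PATH}"
    }
    stages {
        stage('check') {
            steps {
                bat 'nohup --version'   // verify nohup is now resolvable on the agent
            }
        }
    }
}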
EDIT:
Just saw this question is 3 years old... Well. But since there was no answer so far, and a lot of people still seem to get here, it might still help someone.
I have Jenkins running on Windows, and I have a build that works fine under Cygwin bash from the Cygwin terminal, so I now want to automate it. However, using this script:
#!C:\cygwin\bin\bash.exe
whoami
make
The system reports me as nt authority\system, not the user ken that I get when using an interactive shell. Is there an easy way to persuade Jenkins or Cygwin to run as me?
Most likely you are running Jenkins with the default installation. You have two options. The first is mentioned in the comment: change the "Service account" to be the same as yours.
The second option is derived from best practices: run the Jenkins master on a system with backups etc., configure a slave node with your account credentials, and change the project configuration to build on that specific node.
(It is possible to run slave and master on the same machine with different credentials, just in case you want to try things out.)
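For the first option, the service account of the Jenkins Windows service can be changed via services.msc (Jenkins service > Properties > Log On tab) or from an elevated prompt; a sketch, assuming the service is named jenkins and ken is the local account from the question:

sc config jenkins obj= ".\ken" password= "your-password"
sc stop jenkins
sc start jenkins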
The real problem I was having was not that the shell script was running as the wrong user, but that it was not executing the default /etc/profile. So the solution was simply:
#!C:\cygwin\bin\bash.exe -l
whoami
make
I was still nt authority\system, but now I had the correct environment set up and could run make successfully.
Note also that if I create a /home/system directory, I can add .bash_profile, etc., to that directory to further customise the build environment.
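As a small illustration of that last point, a hypothetical .bash_profile for the SYSTEM account's new home directory (the exported values are placeholders, not recommendations):

mkdir -p /home/system
cat > /home/system/.bash_profile <<'EOF'
# Placeholder customisations for builds running as nt authority\system
export PATH="/usr/local/bin:$PATH"
export MAKEFLAGS="-j4"
EOF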