I am using the MATLAB Docker image to run MATLAB in a container and do calculations within it.
https://github.com/mathworks-ref-arch/matlab-dockerfile
When you start the container with docker run, it spins up a command line running MATLAB that waits for user input.
My goal is to use the Docker image as a build slave in Jenkins, so it spins up the container and kills it automatically after usage. The desired MATLAB commands should be implemented in the Jenkins job. So far everything is set up, and Jenkins starts MATLAB in a Docker slave environment and then exits:
[Pipeline] sh
+ chmod +x startmatlab.sh
[Pipeline] sh
+ ./startmatlab.sh
MATLAB is selecting SOFTWARE OPENGL rendering.
< M A T L A B (R) >
Copyright 1984-2019 The MathWorks, Inc.
R2019b (9.7.0.1190202) 64-bit (glnxa64)
August 21, 2019
To get started, type doc. For product information, visit www.mathworks.com.
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
[withMaven] pipelineGraphPublisher - triggerDownstreamPipelines
[withMaven] downstreamPipelineTriggerRunListener - completed in 58 ms Finished: SUCCESS
startmatlab.sh is the script provided by MathWorks to start MATLAB in the container, and looks like this:
#!/bin/bash
#
# Copyright 2019 The MathWorks, Inc.
ECHO=echo
#=======================================================================
build_cmd () { # Takes the cmd input string and outputs the same
               # string correctly quoted to be evaluated again.
               #
               # Always returns a 0
               #
               # usage: build_cmd
               #
    # Use version of echo here that will preserve
    # backslashes within $cmd. - g490189
    $ECHO "$1" | awk '
#----------------------------------------------------------------------------
        BEGIN { squote = sprintf ("%c", 39)  # set single quote
                dquote = sprintf ("%c", 34)  # set double quote
        }
        NF != 0 { newarg=dquote              # initialize output string to
                                             # double quote
                  lookquote=dquote           # look for double quote
                  oldarg = $0
                  while ((i = index (oldarg, lookquote))) {
                      newarg = newarg substr (oldarg, 1, i - 1) lookquote
                      oldarg = substr (oldarg, i, length (oldarg) - i + 1)
                      if (lookquote == dquote)
                          lookquote = squote
                      else
                          lookquote = dquote
                      newarg = newarg lookquote
                  }
                  printf " %s", newarg oldarg lookquote }
#----------------------------------------------------------------------------
'
    return 0
}

ARGLIST=""
while [ $# -gt 0 ]; do
    case "$1" in
        -r|-batch)
            QUOTED_CMD=`build_cmd "$2"`
            ARGLIST="${ARGLIST} $1 `$ECHO ${QUOTED_CMD}`"
            shift
            ;;
        *)
            ARGLIST="${ARGLIST} $1"
    esac
    shift
done

eval exec "matlab ${ARGLIST}"
exit
In my Jenkins job I defined a stage that just executes that script and starts MATLAB, which works fine:
stage('test2'){
sh 'chmod +x startmatlab.sh'
sh './startmatlab.sh'
sh '1+1'
}
The only thing is that I am not able to get any commands from the Jenkins pipeline executed within the container's command line. I tried a simple '1+1' placed right after the './startmatlab.sh' step, without success.
/home/jenkins/workspace/DR/docker.MatlabR2019b#tmp/durable-f5af635b/script.sh: 1: /home/jenkins/workspace/DR/docker.MatlabR2019b#tmp/durable-f5af635b/script.sh: 1+1: not found
So my question is whether anyone can help me figure out how or where I have to put my commands to get them executed here...
Many thanks in advance!
I was facing a similar issue: I wanted to execute MATLAB commands inside the container once it spins up. Below is the way suggested by MathWorks.
sh 'matlab -batch ver'
For more information, refer to the matlab -batch option.
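Applied to the stage from the question, that could look roughly like the following sketch (the MATLAB expression is just an illustration; note that startmatlab.sh already forwards -batch to matlab, as its argument loop shows):

```groovy
stage('test2') {
    sh 'chmod +x startmatlab.sh'
    // MATLAB runs the given code non-interactively and exits when done,
    // so the container can be torn down afterwards
    sh './startmatlab.sh -batch "disp(1+1)"'
}
```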
I am trying to use Proxmox VE as the hypervisor for running VMs.
In one of my VMs, I have a hookscript that is written for the bash shell:
#!/bin/bash
if [ $2 == "pre-start" ]
then
echo "gpu-hookscript: Resetting GPU for Virtual Machine $1"
echo 1 > /sys/bus/pci/devices/0000\:01\:00.0/remove
echo 1 > /sys/bus/pci/rescan
fi
which is to help with enabling GPU passthrough.
And then I have another hookscript that is written in Perl, which enables virtio-fs:
#!/usr/bin/perl
# Example hook script for PVE guests (hookscript config option)
# You can set this via pct/qm with
# pct set <vmid> -hookscript <volume-id>
# qm set <vmid> -hookscript <volume-id>
# where <volume-id> has to be an executable file in the snippets folder
# of any storage with directories e.g.:
# qm set 100 -hookscript local:snippets/hookscript.pl
use strict;
use warnings;
print "GUEST HOOK: " . join(' ', @ARGV) . "\n";
# First argument is the vmid
my $vmid = shift;
# Second argument is the phase
my $phase = shift;
if ($phase eq 'pre-start') {
# First phase 'pre-start' will be executed before the guest
# is started. Exiting with a code != 0 will abort the start
print "$vmid is starting, doing preparations.\n";
system('/var/lib/vz/snippets/launch-virtio-daemon.sh');
# print "preparations failed, aborting."
# exit(1);
} elsif ($phase eq 'post-start') {
# Second phase 'post-start' will be executed after the guest
# successfully started.
print "$vmid started successfully.\n";
} elsif ($phase eq 'pre-stop') {
# Third phase 'pre-stop' will be executed before stopping the guest
# via the API. Will not be executed if the guest is stopped from
# within e.g., with a 'poweroff'
print "$vmid will be stopped.\n";
} elsif ($phase eq 'post-stop') {
# Last phase 'post-stop' will be executed after the guest stopped.
# This should even be executed in case the guest crashes or stopped
# unexpectedly.
print "$vmid stopped. Doing cleanup.\n";
} else {
die "got unknown phase '$phase'\n";
}
exit(0);
What would be the best way for me to combine these two files into a single format, so that I can use it as a hookscript in Proxmox?
I tried reading the thread here about how to convert a bash shell script to Perl, and not being a programmer, admittedly, I didn't understand what I was reading.
I appreciate the team's help in educating a non-programmer.
Thank you.
Before the line
system('/var/lib/vz/snippets/launch-virtio-daemon.sh');
please insert:
system('echo 1 > /sys/bus/pci/devices/0000\:01\:00.0/remove');
system('echo 1 > /sys/bus/pci/rescan');
If your original code above had evaluated the return codes of these Perl system() calls (which it does not):
echo 1 > /sys/bus/pci/devices/0000\:01\:00.0/remove
echo 1 > /sys/bus/pci/rescan
you could apply solutions from:
Getting Perl to return the correct exit code
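Putting the pieces together, the pre-start branch of the Perl hookscript would then read roughly like this (a sketch assembled from the two scripts in the question; error handling of the system() calls is still omitted):

```perl
if ($phase eq 'pre-start') {
    print "$vmid is starting, doing preparations.\n";
    # from the bash hookscript: reset the GPU for passthrough
    system('echo 1 > /sys/bus/pci/devices/0000:01:00.0/remove');
    system('echo 1 > /sys/bus/pci/rescan');
    # original virtio-fs preparation
    system('/var/lib/vz/snippets/launch-virtio-daemon.sh');
}
```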
I have this stage in my Jenkins pipeline that runs a command and stores the output in a variable. I'm trying to get the id number from the stored string, but I am getting the error bad ${} modifier. It should have printed 00062100. It works correctly in the console.
stage('test') {
agent {node 'test'}
steps{
sh "string=$(onetstat -a -P 1111)"
sh "echo ${string:6:8}"
}
}
output from the command("BUILD 00062100 Listen")
**Update:**
stage('server2') {
agent {node 'test'}
steps{
sh '''
var="$(onetstat -a -P 1111)"
echo ${var:6:8}
'''
}
}
**log of the run**
[Pipeline] sh
+ + onetstat -a -P 1111
+ 1<TMP> /tmp/shGgcEdAGgA
var=
BUILDER8 00069B50 Listen
Local Socket: 127.0.0.1..1111
Foreign Socket: 0.0.0.0..0
/Build#tmp/durable-a93a2921/script.sh 3: FSUM7728 bad ${} modifier
There are two misunderstandings in your example. When you use double quotes in the Jenkinsfile, you construct a Groovy String that substitutes variables (defined using the $ sign) with their associated values (or expressions).
Another misunderstanding is creating a bash variable in one sh step and accessing it in another sh step. It won't work that way: each sh step runs in its own shell process, and any local variable created in one shell cannot be accessed in another.
You can solve both issues. Firstly, replace the double quotes with single quotes in the sh step. Secondly, define the shell script in a single sh step; you can use a Groovy multiline string for that (triple quotes). Consider the following example:
pipeline {
agent any
stages {
stage("Test") {
steps {
// Below code prints nothing
sh 'something="BUILD 00062100 Listen"'
sh 'echo ${something:6:8}'
// Below code prints 00062100
sh '''
something="BUILD 00062100 Listen"
echo ${something:6:8}
'''
}
}
}
}
Output:
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] sh
+ something='BUILD 00062100 Listen'
[Pipeline] sh
+ echo
[Pipeline] sh
+ something='BUILD 00062100 Listen'
+ echo 00062100
00062100
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
I used this solution to get the id from the command output:
var=$(onetstat -a -P 1111)
var=$(echo $var | cut -b 6-10)
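For reference, the same id can also be pulled out by splitting on whitespace instead of byte positions, which stays correct even if the leading name changes length. A minimal sketch using the sample output from the question:

```shell
#!/bin/sh
# sample line as shown in the question
var="BUILD 00062100 Listen"
# the id is the second whitespace-separated field
id=$(echo "$var" | awk '{print $2}')
echo "$id"   # -> 00062100
```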
I am trying to execute a shell script on a windows node using Jenkins.
The bash script uses sort -u flag in one of the steps to filter out unique elements from an existing array
list_unique=($(echo "${list[@]}" | tr ' ' '\n' | sort -u | tr '\n' ' '))
Note - shebang used in the script is #!/bin/bash
On calling the script from command prompt as - bash test.sh $arg1
I got the following error -
-uThe system cannot find the file specified.
I understand the issue was that with the above call, the Windows sort.exe was being used and not the Unix sort command. To get around this, I changed the PATH variable in the Windows system variables and moved \cygwin\bin ahead of \Windows\System32.
This fixed the issue and the above call gave me the expected results.
However, when the same script is called on this node using Jenkins, I get the same error again:
-uThe system cannot find the file specified.
Jenkins stage calling the script
stage("Run Test") {
options {
timeout(time: 5, unit: 'MINUTES')
}
steps {
script {
if(fileExists("${Test_dir}")){
dir("${Test_dir}"){
if(fileExists("test.sh")){
def command = 'bash test.sh ${env.arg1}'
env.output = sh(returnStdout: true , script : "${command}").trim()
if (env.output == "Invalid"){
def err_msg = "Error Found."
sh "echo -n '" + err_msg + " ' > ${ERR_MSG_FILE}"
error(err_msg)
}
sh "echo Running tests for ${env.output}"
}
}
}
}
}
}
Kindly help.
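One thing worth checking (a sketch, not a confirmed fix; the Cygwin install path below is an assumption): a Jenkins agent running as a Windows service captures PATH when it starts, so it will not see the reordered system PATH until the agent is restarted. Alternatively, the pipeline can prepend Cygwin's bin directory itself:

```groovy
// hypothetical Cygwin location; adjust to your installation
withEnv(["PATH=C:\\cygwin\\bin;${env.PATH}"]) {
    sh 'bash test.sh "$arg1"'
}
```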
I am trying to print two variables in a Jenkins shell step (one of which is a global one). When I print them independently, one shell step each, it works; however, when I try both variables on a single line, it fails. See the output; it seems like everything after the first variable is cropped.
I've tried printing two local variables, and that seems to work. However, I need the global one.
#!/usr/bin/env groovy
def START
node ('master') {
// options{
// timestamps()
// }
stage("one") {
script{
START = sh(script: 'date --utc +%FT%T', returnStdout: true)
}
stage("two") {
def END = sh(script: 'date --utc +%FT%T', returnStdout: true)
sh "echo start $START"
sh "echo end $END"
sh "echo $START and $END"
}
}
}
+ date --utc +%FT%T
[Pipeline] sh
+ echo start 2019-08-01T14:48:08
start 2019-08-01T14:48:08
[Pipeline] sh
+ echo end 2019-08-01T14:48:09
end 2019-08-01T14:48:09
[Pipeline] sh
+ echo 2019-08-01T14:48:08
2019-08-01T14:48:08
+ and 2019-08-01T14:48:09
/var/jenkins_home#tmp/durable-979e1b9e/script.sh: 2: /var/jenkins_home#tmp/durable-979e1b9e/script.sh: and: not found
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 127
Finished: FAILURE
sh is a dedicated step of Jenkins Pipeline, and the Groovy interpolation itself is fine here. The first two calls work because the value captured with returnStdout ends with a trailing newline that lands harmlessly at the end of the echo line. In sh "echo $START and $END", however, that newline inside $START splits the generated script into two shell lines, so " and 2019-08-01T14:48:09" is executed as a separate command, which is exactly the "and: not found" error in your log. Strip the newline when capturing, e.g. sh(script: 'date --utc +%FT%T', returnStdout: true).trim(), and the single-line echo works for both the global and the local variable. Writing the variables as ${START} merely delimits the GString expression; it does not remove the newline.
For more on GStrings have a look at these examples: http://grails.asia/groovy-gstring-examples
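Note that returnStdout keeps the command's trailing newline, which is what splits the echo into two shell lines; trimming at capture time avoids it (trim() is plain Groovy/Java String API). A minimal sketch:

```groovy
// returnStdout keeps the command's trailing newline; trim() removes it
def START = sh(script: 'date --utc +%FT%T', returnStdout: true).trim()
def END   = sh(script: 'date --utc +%FT%T', returnStdout: true).trim()
sh "echo $START and $END"   // now expands to a single shell line
```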
I have a custom 'runner' script that I need to use to run all of my terminal commands. Below you can see the general idea of the script.
#!/usr/bin/env bash
echo "Running '$#'"
# do stuff before running cmd
$#
echo "Done"
# do stuff after running cmd
I can use the script in bash as follows:
$ ./run.sh echo test
Running 'echo test'
test
Done
$
I would like to use it like this:
$ echo test
Running 'echo test'
test
Done
$
Bash has the trap ... DEBUG and PROMPT_COMMAND, which lets me execute something before and after a command, but is there something that would allow me to execute instead of the command?
There is also the command_not_found_handle which would work if I had an empty PATH env variable, but that seems too dirty.
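For reference, a minimal sketch of that hook (bash 4+); it only fires when command lookup fails, which is why the empty-PATH trick would be needed for commands that do exist:

```shell
#!/usr/bin/env bash
# invoked only when a command is NOT found as a builtin,
# a function, or an executable in PATH
command_not_found_handle() {
  echo "intercepted: $*"
  # a real runner would do setup/teardown around the command here
  return 0
}
definitely_not_a_real_command_xyz hello   # prints: intercepted: definitely_not_a_real_command_xyz hello
echo test                                 # echo is found, so the hook does not run
```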
After some digging, I ended up looking at the source code and found that bash does not support custom executors. Below is a patch that adds a new handle working similarly to command_not_found_handle.
diff --git a/eval.c b/eval.c
index f02d6e40..8d32fafa 100644
--- a/eval.c
+++ b/eval.c
@@ -52,6 +52,10 @@
extern sigset_t top_level_mask;
#endif
+#ifndef EXEC_HOOK
+# define EXEC_HOOK "command_exec_handle"
+#endif
+
static void send_pwd_to_eterm __P((void));
static sighandler alrm_catcher __P((int));
@@ -172,7 +176,15 @@ reader_loop ()
executing = 1;
stdin_redir = 0;
- execute_command (current_command);
+ SHELL_VAR *hookf = find_function (EXEC_HOOK);
+ if (hookf == 0) {
+ execute_command (current_command);
+  } else {
+ char *command_to_print = make_command_string (current_command);
+ WORD_LIST *og = make_word_list(make_word(command_to_print), (WORD_LIST *)NULL);
+ WORD_LIST *wl = make_word_list(make_word(EXEC_HOOK), og);
+ execute_shell_function (hookf, wl);
+ }
exec_done:
QUIT;
One can then define the function command_exec_handle() { eval $1; }, which will be executed instead of the original command given at the prompt. The original command is passed in full as the first parameter. The command_exec_handle function can be defined in .bashrc, and it works as expected.
Notice: this is very dangerous! If you mess up and put a bad command_exec_handle in your .bashrc, you might end up with a shell that does not execute commands. It will be quite hard to fix without booting from a live CD.
It seems you have the same problem listed here. If you want to run some commands when your original command is not found, Bash 4's command_not_found_handle will certainly fit your needs.
Try to be more specific, maybe with some code snippets that do or do not work, to help us help you...