I have a Jenkins master-slave setup with the master on a Windows server, a few Windows slaves, and one Mac slave.
The flow is like this:
1. A Jenkins shell step triggers a shell script (sh sample.command) [this is used on both the Windows nodes (using win-bash) and the Mac node].
This first step, where it triggers the shell script, works fine on both the Windows and Mac slaves.
#!/bin/bash
echo "This is a shell script acting as a middleware to trigger the NAnt...."
echo "Calling NAnt...."
nant ${1} ${2} ${3} ${4}
2. Now, sample.command contains code to trigger a nant command, which is not working on the Mac slave and gives me an error:
nant: command not found
3. NAnt is installed on the Mac slave through brew, and when I run sample.command directly on the Mac machine it works fine and executes the nant command, but it doesn't work through Jenkins.
Any help would be appreciated, thanks in advance.
I was able to solve this by setting the $PATH variable at the beginning of the shell script. I just added the line below:
export PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/share/dotnet/bin
The paths mentioned here might be different on other machines. What I did was check $PATH when calling the script from the Mac machine itself, then copy-paste it, and that worked.
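For reference, here is what sample.command looks like with the fix in place (the directories are the ones from my machine; copy yours from a working interactive shell on the Mac):

#!/bin/bash
# Jenkins starts the slave with a minimal environment, so set PATH explicitly
export PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/share/dotnet/bin
echo "This is a shell script acting as a middleware to trigger the NAnt...."
echo "Calling NAnt...."
nant ${1} ${2} ${3} ${4}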
I am trying to run a bash script from Groovy in Jenkins, but I am not able to find the right commands.
I can run this and it creates my "RJ" directory:
process = "mkdir /app/jenkins/workspace/TEST/RJ"
println process.execute()
But when I try to run my bash script, it is not creating my output file. I am able to run this bash script on the server directly, and it creates my expected output file.
process = "/app/jenkins/workspace/TEST/info_file.sh"
println process.execute()
Why run it through Groovy and not directly via an SSH command?
If you do want to do it via Groovy, you'll need to add an SSH library and do the whole connection, auth, and execute flow; process.execute won't run the command on a remote Linux box.
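If you go the SSH route instead, a plain shell step is enough; a minimal sketch (the user and host names are placeholders):

# Run the script remotely; the ssh exit status reflects the script's result
ssh user@host '/app/jenkins/workspace/TEST/info_file.sh'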
First, you aren't checking stderr, and you aren't waiting for the process to end.
A similar problem was discussed here:
Curl request from command line and via Groovy script
With the error text, it's easier to solve the problem.
I'm trying to create a job in Jenkins that will execute a simple shell script. However, it seems that Jenkins isn't actually doing anything with my script. No matter what value I put into the Execute Shell Command section, Jenkins always says that it passes. Even if I put in a bogus filename like "RandomBogusFilename.sh" it'll say the job was a success.
Can anyone point out what I'm doing wrong and how I can get Jenkins to actually use my shell script?
The shell script, the job config, and the console output are all shown below. I'm currently trying to do this on a Windows Server 2008 R2 Standard machine.
Thanks.
My .sh file
File Name: surveyToolRequest.sh
File Location: /jobs/Jeff Shell Script Test/workspace
Description:
Hit a web address and retrieve the HTTP Response. Then print out the HTTP Response.
#!/bin/bash
response_code=$(curl -s -o /dev/null -w "%{http_code}" http://SOME-WEBSITE.COM)
echo "The response code is " $response_code
My Jenkins Job Config
Jenkins Console Output
I played with this and found that it worked if I specified the path to the script. If the script is in your job's workspace directory,
./surveyToolRequest.sh
should work as Jenkins looks for files relative to the root of the workspace.
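For example, the Execute Shell build step could contain just this (assuming the script sits at the workspace root, as in the question):

# Jenkins runs this from the job workspace, so a relative path resolves
chmod +x surveyToolRequest.sh
./surveyToolRequest.sh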
It's more common to just put the contents of the script file directly into the job configuration; that way you can see what the job is doing, and you'll avoid problems like this one.
You should use "Execute Windows batch command" and not "Execute shell".
I'm running Lighttpd on Cygwin. I have a Lua CGI script that calls a BASH script which calls notepad.exe. My actual problem is running a C# application but I've tried to simplify the problem with notepad for now.
When I call the CGI web page, I get the error: notepad.exe: command not found
But when I run the BASH from the Cygwin shell, notepad runs fine with no error.
It looks like the Path is being cleaned when lighttpd is running. How do I make sure the environment is the same?
CGI (Lua):
#!/usr/bin/lua
-- Run the shell script and capture its output together with its exit code
local cmd = "/opt/abc/scripts/test.sh"
local f = io.popen( cmd.." ; echo RC=$?" )
assert(f)
local str = f:read('*a')
f:close()
print ("Content-type: text/html\n")
print ("<html><body>")
print ("<br><b>Output</b>: ", str)
print ("</body></html>")
BASH:
#!/bin/bash
echo "Test.sh"
echo "<br>PATH<br> $PATH<hr>"
notepad.exe 2>&1
Did you try invoking it with bash -l?
Also, what's wrong with setting the PATH in your script?
(I don't have a Cygwin machine handy to test.)
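A sketch of the second suggestion, pinning the PATH at the top of test.sh (the directories are typical Cygwin defaults and an assumption; compare them against echo $PATH in a working Cygwin shell):

#!/bin/bash
# Set PATH explicitly so notepad.exe resolves even when lighttpd
# starts with a stripped-down environment
export PATH="/usr/local/bin:/usr/bin:/bin:/cygdrive/c/Windows/system32:/cygdrive/c/Windows"
echo "Test.sh"
echo "<br>PATH<br> $PATH<hr>"
notepad.exe 2>&1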
Lighttpd was being started by the Windows Task Scheduler on system startup and didn't need a user to be logged in. This meant that the server was being started in Windows 'Session 0', which is marked as non-interactive. More info on Windows Sessions
My solution was to throw a simple batch file into the startup folder that would start lighttpd. Alternatively, I could have created a Cygwin service that starts automatically and ensured that the 'interact with desktop' option is checked.
I have a bash script that performs several file operations. When any user runs this script, it executes successfully and outputs a few lines of text. But when I try to cron it, there are problems. It seems to run (I see an entry in the cron log showing it was kicked off), but nothing happens: it produces no output and performs none of its file operations. It also doesn't appear anywhere in the running processes, so it seems to be exiting immediately.
After some troubleshooting I found that removing "set -e" resolved the issue, it now runs from the system cron without a problem. So it works, but I'd rather have set -e enabled so the script exits if there is an error. Does anyone know why "set -e" is causing my script to exit?
Thanks for the help,
Ryan
With set -e, the script will stop at the first command which gives a non-zero exit status. This does not necessarily mean that you will see an error message.
Here is an example, using the false command which does nothing but exit with an error status.
Without set -e:
$ cat test.sh
#!/bin/sh
false
echo Hello
$ ./test.sh
Hello
$
But the same script with set -e exits without printing anything:
$ cat test2.sh
#!/bin/sh
set -e
false
echo Hello
$ ./test2.sh
$
Based on your observations, it sounds like your script is failing for some reason (presumably related to the different environment, as Jim Lewis suggested) before it generates any output.
To debug, add set -x to the top of the script (as well as set -e) to show commands as they are executed.
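For example, the top of the script might read:

#!/bin/bash
set -e   # exit on the first failing command
set -x   # trace each command to stderr as it runs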
When your script runs under cron, the environment variables and path may be set differently than when the script is run directly by a user. Perhaps that's why it behaves differently?
To test this: create a new script that does nothing but printenv and echo $PATH.
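A minimal version of such a script (the filename is just a suggestion):

#!/bin/sh
# envdump.sh: print the full environment, then the PATH on its own line
printenv
echo "$PATH"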
Run this script manually, saving the output, then run it as a cron job, saving that output.
Compare the two environments. I am sure you will find differences... an interactive login shell will have had its environment set up by sourcing a ".login", ".bash_profile", or similar script (depending on the user's shell). This generally will not happen in a cron job, which is usually the reason for a cron job behaving differently from running the same script in a login shell.

To fix this: at the top of the script, either explicitly set the environment variables and PATH to match the interactive environment, or source the user's ".bash_profile", ".login", or other setup script, depending on which shell they're using.
I have roughly 12 computers that each have the same script on them. This script merely pings all the other machines and prints out whether each machine is "reachable" or "unreachable". However, it is inefficient to log in to each machine manually over SSH to execute this script.
Suppose I'm logged into node 1. Is there any way to for me to login to node 2-12 automatically using SSH, execute the ping script, pipe the results to a file, logout and proceed to the next machine? Some kind of bash shell script?
I'm afraid I'm at a loss here since I haven't had experience with shell-scripting before.
Since the script is on the other machines, you can just have ssh run the command for you there:
ssh $hostname my_script >> results_file
When you specify a command like that, it's executed instead of the login shell.
I'll leave it up to you to figure out how to loop over hostnames!
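For instance, a simple loop might look like this (the node names are placeholders for your hosts):

# Run the ping script on each node, appending all output to one file
for host in node2 node3 node4; do
    ssh "$host" my_script >> results_file
done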
One trick you'll need to use is setting up pre-authorized keys for each host. Then you can run a script on one host, running something like 'ssh hostname command > log.hostname'
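Setting that up is a one-time step per host; a sketch using standard OpenSSH tools (user and host names are placeholders):

ssh-keygen -t rsa           # generate a key pair once, accepting the defaults
ssh-copy-id user@hostname   # install the public key on each remote node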
This script might be what you are looking for: It allows you to execute one command (which can be your script) on multiple remote machines via ssh. It's a simple script with bash source available, so you should be able to customize it to your needs:
http://www.heinzi.at/projects/upgradebest.sh/
Yes, you can.
You actually need two small scripts, as follows:
remote_ssh.sh (which takes the name of the machine as its first argument; the rest of the arguments are the script you want to execute, with its own arguments)
Example: remote_ssh.sh node5 "echo hello world"
remote_ssh.sh is as follows:
#!/bin/bash
# First argument: the target machine; remaining arguments: the command to run there
FST_ARG=$1
shift
REST_ARG="$*"
echo "Executing REMOTE COMMAND ON $FST_ARG"
# Pass the current directory along so the remote side can cd into it first
/usr/bin/ssh "$FST_ARG" bash execute_ssh_command.sh "$FST_ARG" "$(pwd)" $REST_ARG
execute_ssh_command.sh is as follows:
#!/bin/bash
# $1 is the calling host (informational), $2 is the directory to work in;
# everything after that is the command to execute
FST_ARG=$1
DIR_ARG=$2
shift 2
REST_ARG="$*"
cd "$DIR_ARG"
$REST_ARG
Of course, you have to put these two scripts on the path on all of your nodes (maybe ~/bin/).
Hope that's helpful.