I have a simple "free-style" test job in Hudson.
It checks out a file from git (that part succeeds).
It is also supposed to exec a script that appends to that file.
The script looks like:
#!/bin/sh -ex
echo "$0 was run on " `date` >> /tmp/failme.log
#echo "$0 was run on " `date` >> $HUDSON_HOME/failme.log
echo "this should fail"
echo "this went to stderr" >&2
exit -1 # non-portable: bash reports -1 as 255 (still non-zero); some shells reject it
I put the second line in to test whether the script is even run.
/tmp/failme.log is missing after a "successful" build.
I can run the script as the hudson user (after allowing it to log in) and the script behaves properly.
I'm at a loss. I've read several inquiries here and in other forums and blogs about using Hudson variables in scripts; none of them mention anything special that I have to do to get Hudson to exec the script.
Thanks in advance.
nodnarb (strike that, reverse it)
Yes, I'm answering my own question.
I attempted the same configuration with a new job, and the script runs as expected. I have no explanation for this. I have attempted to duplicate the failure above three times and cannot reproduce it, so I am resolving this issue. Maybe something "hiccuped" when the original job was created.
Thank you to all who commented.
B
Related
I made a simple script; the file name is sutest.
#!/bin/bash
cd ~/Downloads/redis-4.0.1/src
./redis-server
echo "uid is ${UID}"
echo "user is ${USER}"
echo "username is ${USERNAME}"
I ran the script with $ . sutest
But the script stops at ./redis-server,
so I can't see the echo messages.
I want to write this kind of script file. How can I do that?
I would appreciate your help.
Now take a more general case:
myscript1 runs a process like redis-server above.
Another file, myscript2, runs a process like redis-server above.
Another file, myscript3, runs a process like redis-server above.
How can I run the three script files simultaneously?
I want to do this over an ssh connection.
To make matters worse, what if I can't use screen or tmux?
Add a & at the end of the line:
./redis-server &
The & runs the job in the background, so the script continues.
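For the more general case in the question, launching several such scripts over one ssh connection without screen or tmux, one option is to combine nohup with &, so the processes keep running after the ssh session ends. A minimal sketch (myscript1..3 stand for the hypothetical scripts from the question):
#!/bin/sh
# start each long-running job in the background, detached from the terminal;
# nohup keeps it alive after the ssh session closes, with output sent to a log
nohup ./myscript1 > myscript1.log 2>&1 &
nohup ./myscript2 > myscript2.log 2>&1 &
nohup ./myscript3 > myscript3.log 2>&1 &
echo "all three jobs started"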
Just do the echoes first:
cd ~/Downloads/redis-4.0.1/src
echo "uid is ${UID}"
echo "user is ${USER}"
echo "username is ${USERNAME}"
exec ./redis-server
The use of exec is a small trick (which you can omit if you prefer): it replaces the shell script with redis-server, so the shell script is no longer running at all. Without exec, you end up with the shell script waiting around for redis-server to finish, which is unnecessary if the script will do nothing further.
If you don't like that for some reason, you can keep the original order:
cd ~/Downloads/redis-4.0.1/src
./redis-server & # run in background
echo "uid is ${UID}"
echo "user is ${USER}"
echo "username is ${USERNAME}"
wait # optional
I need help with some scripts I'm writing.
Scenario:
Script A is executed by a scheduling process. It takes the arguments passed to it, parses them in some way, and runs script B, feeding it those arguments;
Script B does sudo -u user ssh user@REMOTEMACHINE, runs some commands on the remote machine, and finally runs script C (also on the remote machine). I am passing those commands using a HERE DOCUMENT, and I'm passing the earlier arguments to this script too.
This "flow" runs correctly and the job completes successfully.
My problems are:
Since this "flow" is ran by a scheduling process, I need to tell it if the job completed successfully or not. I'm doing this via exit codes, so what I want is to have a chain of exit codes, returning back from the last script to the first, in case of errors. I'm not able to perform this, because exit codes works correctly for the single scripts (I tried executing them singularly and look for the exit codes), but they are not sended back to the parent script. In my opinion, the problem is that ssh is getting the exit code from the child script, which in fact ended successfully, because there was no error executing it: it's the command inside of it that gone wrong.
While the process works correctly, I still get this line:
ssh: Could not resolve hostname : Name or service not known
But the script actually completes successfully.
I hope you understand what I wrote; I can post my scripts here if needed.
Thanks
O.
EDIT:
These are the scripts. There could be some problems with variable names because I renamed them quickly to upload the files.
Since I can't upload 3 files because of my low reputation, I merged them into a single file:
SCRIPT FILE
I managed to solve the problem.
I followed olivier's advice and used the escape character so that the variable is expanded by the remote machine.
Also, I implemented different exit codes based on where the error occurred.
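To illustrate what that escaping does, here is a minimal sketch (the host, argument, and script names are hypothetical placeholders). In an unquoted here document, an unescaped variable is expanded locally before ssh runs, while an escaped one is expanded by the remote shell; ssh itself exits with the status of the remote commands:
sudo -u user ssh user@REMOTEMACHINE /bin/sh <<EOF
./scriptC "$ARG1"   # unescaped: expanded locally, before being sent
exit \$?            # escaped: expanded on the remote machine, after scriptC runs
EOF
echo "remote exit code: $?"   # ssh returns the exit status of the remote shell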
At last, I modified the first script as follows, after launching sudo -u for the second script:
EXITCODEOFTHESECONDSCRIPT=$?
if [ "$EXITCODEOFTHESECONDSCRIPT" -eq 0 ]
then
    echo ""
    echo "Export job took $SECONDS seconds."
    echo ""
    exit 0
else
    exit "$EXITCODEOFTHESECONDSCRIPT"
fi
This way I am able to exit the main script while MAINTAINING the exit code provided by the second script.
In fact, I found that the process was working correctly even in case of errors; the problem was that I was running more commands after the second script failed (the echo command was enough), and those set other exit codes that overwrote the one I wanted.
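To illustrate that pitfall with a minimal, self-contained example: $? always reflects the most recent command, so anything run between the failure and the check discards the status you care about:
false             # fails with exit status 1
echo "logging"    # succeeds, so $? is now 0
echo $?           # prints 0: the failure status is already gone
Capturing $? into a variable immediately, as in the snippet above, is what preserves it.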
Thanks to all !
I am new to shell scripting and my question here might be very basic.
I have a script (.sh file) which makes calls to a couple of script files. Now suppose I get an error message on execution of script 1: I would like to abort the script and completely terminate the script flow. It should not go on to the next step.
Could you please tell me how I can achieve this?
EDIT
My script A makes a call to script B, which internally calls some other script C.
If execution of the StopServer1.py script (part of script B) fails, the flow should terminate right there: it should not reach StopServer2 and StopServer3, and control goes back to script A, which should also terminate.
Please let me know if set -e will help here.
cd /usr/oracle/WSAutomate/
java weblogic.WLST /usr/oracle/StopServer1.py >> $logFileName
java weblogic.WLST /usr/oracle/StopServer2.py >> $logFileName
java weblogic.WLST /usr/oracle/StopServer3.py >> $logFileName
You can use:
set -e
to abort the script on error.
main script snippet:
#!/bin/bash
set -e
# Any failure in these script calls will make the main script exit immediately
./script1
./script2
./script3
You can use the && operator this way: <execute_script1> && <execute_script2>.
It executes script1 and, only if everything goes right, executes script2. In case of error, script2 will not run.
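Applied to the snippet from the question (a sketch, assuming $logFileName is already set), the chain stops at the first StopServer script that fails:
cd /usr/oracle/WSAutomate/ &&
java weblogic.WLST /usr/oracle/StopServer1.py >> "$logFileName" &&
java weblogic.WLST /usr/oracle/StopServer2.py >> "$logFileName" &&
java weblogic.WLST /usr/oracle/StopServer3.py >> "$logFileName"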
I'm not having much luck Googling this question. I thought about posting it on SF, but it actually seems like a development question; if not, please feel free to migrate.
So, I have a script that runs via cron every morning at about 3 am. I also run the same script manually sometimes. The problem is that every time I run my script manually and it fails, it sends me an e-mail, even though I can look at the output and view the error in the console.
Is there a way for the bash script to tell that it's being run through cron (perhaps by using whoami) and only send the e-mail if so? I'd love to stop receiving emails when I'm doing my testing...
you can try "tty" to see if it's run by a terminal or not. that won't tell you that it's specifically run by cron, but you can tell if its "not a user as a prompt".
you can also get your parent-pid and follow it up the tree to look for cron, though that's a little heavy-handed.
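A minimal sketch of that parent-PID walk, using only portable ps options (it simply climbs the tree until it reaches PID 1):
#!/bin/sh
pid=$$
while [ "$pid" -gt 1 ]; do
  pid=$(ps -o ppid= -p "$pid" | tr -d ' ')       # parent of the current PID
  case "$(ps -o comm= -p "$pid")" in
    *cron*) echo "started by cron"; exit 0 ;;    # matches cron and crond
  esac
done
echo "not started by cron"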
I had a similar issue. I solved it by checking whether stdout was a TTY. This is a check to see if your script runs in interactive mode:
if [ -t 1 ] ; then
echo "interacive mode";
else
#send mail
fi
I got this from: How to detect if my shell script is running through a pipe?
The -t test returns true if the file descriptor is open and refers to a terminal; '1' is stdout.
Here are two different options for you:
Take the emailing out of your script/program and let cron handle it. If you set the MAILTO variable in your crontab, cron will send anything printed out to that email address. eg:
MAILTO=youremail@example.com
# run five minutes after midnight, every day
5 0 * * * $HOME/bin/daily.job
Set an environment variable in your crontab that is used to determine if running under cron. eg:
THIS_IS_CRON=1
# run five minutes after midnight, every day
5 0 * * * $HOME/bin/daily.job
and in your script something like
if [ -n "$THIS_IS_CRON" ]; then echo "I'm running in cron"; else echo "I'm not running in cron"; fi
Why not have a command-line argument, say -t for testing or -c for cron?
Or better yet:
-e=email@address.com
If it's not specified, don't send an email.
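A minimal sketch of that flag approach using getopts (note that getopts takes the address as "-e address" rather than "-e=address"; the mail invocation and log path are hypothetical placeholders):
#!/bin/sh
email=""
while getopts "tce:" opt; do
  case "$opt" in
    t) echo "test run" ;;
    c) echo "cron run" ;;
    e) email="$OPTARG" ;;
  esac
done

# ... the actual job runs here ...

if [ -n "$email" ]; then
  mail -s "job failed" "$email" < /tmp/job.log   # hypothetical failure report
fi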
I know the question is old, but I just came across the same problem. This was my solution:
CRON=$(pstree -s $$ | grep -q cron && echo true || echo false)
then test with
if $CRON
then
echo "Being run by cron"
else
echo "Not being run by cron"
fi
Same idea as the one that @eruciform mentioned: this follows your PID up the process tree, checking for cron.
Note: This solution only works specifically for cron, unlike some of the other solutions, which work anytime the script is being run non-interactively.
What works for me is to check $TERM. Under cron it's "dumb", but under a shell it's something else. Run the set command in your terminal, then in a cron script, and compare.
if [ "dumb" == "$TERM" ]
then
echo "cron"
else
echo "term"
fi
I'd like to suggest a new answer to this highly-voted question. It works only on systemd systems with loginctl (e.g. Ubuntu 14.10+, RHEL/CentOS 7+), but it gives a much more authoritative answer than the previously presented solutions.
service=$(loginctl --property=Service show-session $(</proc/self/sessionid))
if [[ ${service#*=} == 'crond' ]]; then
echo "running in cron"
fi
To summarize: when used with systemd, crond (like sshd and others) creates a new session when it starts a job for a user. This session has an ID that is unique for the entire uptime of the machine. Each session has some properties, one of which is the name of the service that started it. loginctl can tell us the value of this property, which will be "crond" if and only if the session was actually started by crond.
Advantages over using environment variables:
No need to modify cron entries to add special invocations or environment variables
No possibility of an intermediate process modifying environment variables to create a false positive or false negative
Advantages over testing for tty:
No false positives in pipelines, startup scripts, etc
Advantages over checking the process tree:
No false positives from processes that also have crond in their name
No false negatives if the script is disowned
Many of the commands used in prior posts are not available on every system (pstree, loginctl, tty). This was the only thing that worked for me on a ten-year-old BusyBox/OpenWrt router that I'm currently using as a blacklist DNS server. It runs a script with an auto-update feature; when run from crontab, it sends an email out.
[ -z "$TERM" ] || [ "$TERM" = "dumb" ] && echo 'Crontab' || echo 'Interactive'
In an interactive shell, the $TERM variable returns the value vt102 for me. I included the check for "dumb" since @edoceo mentioned it worked for him. I didn't use '==' since it's not completely portable.
I also liked the idea from Tal, but I also see the risk of undefined returns. I ended up with a slightly modified version, which seems to work very smoothly in my opinion:
CRON="$( pstree -s $$ | grep -c cron )"
So you can check for $CRON being 1 or 0 at any time.
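For example, a short usage sketch of that flag:
if [ "$CRON" -eq 1 ]
then
    echo "Being run by cron"
else
    echo "Not being run by cron"
fi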
I have a bash script that performs several file operations. When any user runs this script, it executes successfully and outputs a few lines of text, but when I try to cron it there are problems. It seems to run (I see an entry in the cron log showing it was kicked off) but nothing happens: it doesn't output anything and doesn't perform any of its file operations. It also doesn't appear anywhere in the running processes, so it seems to exit immediately.
After some troubleshooting I found that removing "set -e" resolved the issue; it now runs from the system cron without a problem. So it works, but I'd rather have set -e enabled so the script exits if there is an error. Does anyone know why "set -e" is causing my script to exit?
Thanks for the help,
Ryan
With set -e, the script will stop at the first command which gives a non-zero exit status. This does not necessarily mean that you will see an error message.
Here is an example, using the false command which does nothing but exit with an error status.
Without set -e:
$ cat test.sh
#!/bin/sh
false
echo Hello
$ ./test.sh
Hello
$
But the same script with set -e exits without printing anything:
$ cat test2.sh
#!/bin/sh
set -e
false
echo Hello
$ ./test2.sh
$
Based on your observations, it sounds like your script is failing for some reason (presumably related to the different environment, as Jim Lewis suggested) before it generates any output.
To debug, add set -x to the top of the script (as well as set -e) to show commands as they are executed.
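A minimal sketch of that debugging setup (the trace file path is a hypothetical placeholder; under cron you won't see the trace unless you capture it somewhere):
#!/bin/sh
set -ex                           # -e: exit on first error, -x: trace each command
exec >> /tmp/myscript.trace 2>&1  # capture the trace so it survives a cron run

# ... rest of the script ...
The last command to appear in the trace file is the one that made set -e abort the script.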
When your script runs under cron, the environment variables and path may be set differently than when the script is run directly by a user. Perhaps that's why it behaves differently?
To test this: create a new script that does nothing but printenv and echo $PATH.
Run this script manually, saving the output, then run it as a cron job, saving that output.
Compare the two environments. I am sure you will find differences... an interactive login shell will have had its environment set up by sourcing a ".login", ".bash_profile", or similar script (depending on the user's shell). This generally will not happen in a cron job, which is usually the reason a cron job behaves differently from running the same script in a login shell.
To fix this: at the top of the script, either explicitly set the environment variables and PATH to match the interactive environment, or source the user's ".bash_profile", ".login", or other setup script, depending on which shell they're using.