Is it possible to identify, if a Linux shell script is executed by a user or a cronjob?
If yes, how can I identify/check whether the shell script is executed by a cronjob?
I want to implement a feature in my script that returns a different message when it is executed by a cronjob than when it is executed by a user, like this for example:
if [[ "$type" == "cron" ]]; then
echo "This was executed by a cronjob. It's an automated task.";
else
USERNAME="$(whoami)"
echo "This was executed by a user. Hi ${USERNAME}, how are you?";
fi
One option is to test whether the script is attached to a tty.
#!/bin/sh
if [ -t 0 ]; then
echo "I'm on a TTY, this is interactive."
else
logger "My output may get emailed, or may not. Let's log things instead."
fi
Note that jobs fired by at(1) are also run without a tty, though not specifically by cron.
Note also that this is POSIX, not Linux- (or bash-) specific.
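Applied to the messages from the question, a minimal sketch could look like this (it only tells you "interactive or not", not "cron specifically"):
#!/bin/sh
if [ -t 0 ]; then
    # stdin is a terminal: assume a person ran the script
    USERNAME="$(whoami)"
    echo "This was executed by a user. Hi ${USERNAME}, how are you?"
else
    # no terminal: assume cron, at, or some other non-interactive caller
    echo "This was executed by a cronjob. It's an automated task."
fi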
Related
I have written a simple script
get-consent-to-continue.sh
echo "Would you like to continue [y/n]?"
read response
if [ "${response}" != 'y' ];
then
exit 1
fi
I have added this script to ~/.bashrc as an alias
~/.bashrc
alias getConsentToContinue="source ~/.../get-consent-to-continue.sh"
My goal is to be able to call this from another script
~/.../do-stuff.sh
#!/usr/bin/env bash
# do stuff
getConsentToContinue
# do other stuff IF given consent, ELSE stop execution without closing terminal
Goal
I want to be able to
bash ~/.../do-stuff.sh
And then, when getConsentToContinue is called, if I respond with anything != 'y', then do-stuff.sh stops running without closing the terminal window.
The Problem
When I run
bash ~/.../do-stuff.sh
the alias is not accessible.
When I run
source ~/.../do-stuff.sh
Then the whole terminal closes when I respond with 'n'.
I just want to cleanly reuse this getConsentToContinue script to short-circuit execution of whatever script happens to be calling it. It's just for personal use when automating repetitive tasks.
A script can't force its parent script to exit, unless you source the script (since it's then executing in the same shell process).
Use an if statement to test how getConsentToContinue exited.
if ! getConsentToContinue
then
exit 1
fi
or more compactly
getConsentToContinue || exit
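One way to wire this up without the alias (a sketch; the ~/bin location is just an assumption, and aliases from ~/.bashrc are not expanded in non-interactive scripts anyway):
#!/usr/bin/env bash
# ~/bin/get-consent-to-continue.sh (hypothetical path), run as a normal script
read -r -p 'Would you like to continue [y/n]? ' response
if [ "${response}" != 'y' ]; then
    exit 1   # non-zero exit status signals "no consent" to the caller
fi
and in do-stuff.sh:
#!/usr/bin/env bash
# do stuff
~/bin/get-consent-to-continue.sh || exit   # stops do-stuff.sh, not your terminal
# do other stuff IF given consent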
You could pass the PID of the calling script.
For instance, say you have a parent script called parent.sh:
# do stuff
echo "foo"
check_before_proceed $$
echo "bar"
Then, your check_before_proceed script would look like:
#!/bin/sh
echo Would you like to continue [y/n]?
read response
if [ "${response}" != 'y' ];then
kill -9 $1
fi
The $$ denotes the PID of the parent.sh script itself; you can find the relevant docs here. When we pass $$ as a parameter to the check_before_proceed script, we get access to the PID of the running parent.sh via the positional parameter $1 (see positional parameters).
Note: in my example, the check_before_proceed script would need to be accessible on $PATH
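For example (a sketch; ~/bin is just an assumed location):
chmod +x ~/bin/check_before_proceed
export PATH="$HOME/bin:$PATH"   # e.g. in ~/.bashrc, so parent.sh can find check_before_proceed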
I want to call a program when any SSH user logs in that prints a welcome message. I did this by editing the /etc/ssh/sshrc file:
#!/bin/bash
ip=`echo $SSH_CONNECTION | cut -d " " -f 1`
echo $USER logged in from $ip
For simplicity, I replaced the program call with a simple echo command in the example.
The problem is, I learned SCP is sensitive to any script that prints to stdout in .bashrc or, apparently, sshrc. My SCP commands failed silently. This was confirmed here: https://stackoverflow.com/a/12442753/2887850
Lots of solutions offered quick ways to check if the user is in an interactive terminal:
if [[ $- != *i* ]]; then return; fi link
Fails because [ is not linked
case $- in *i* link
Fails because in is not recognized?
Use tty program (same as above)
tty gave me a bizarre error code when executed from sshrc
While all of those solutions could work in a normal BASH environment, none of them work in the sshrc file. I believe that is because PATH (and I suspect a few other things) aren't actually available when executing from sshrc, despite specifying BASH with a shebang. I'm not really sure why this is the case, but this link is what tipped me off to the fact that sshrc is running in a limited environment.
So the question becomes: is there a way to detect interactive terminal in the limited environment that sshrc executes in?
Use test to check $SSH_TTY (final solution in this link):
test -z "$SSH_TTY" || echo "$USER logged in from $ip"
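Putting it together, the whole /etc/ssh/sshrc can stay silent for non-interactive sessions such as scp (a sketch based on the line above):
#!/bin/bash
# only greet when the session has a tty (interactive ssh);
# scp/sftp sessions then see no output on stdout
if [ -n "$SSH_TTY" ]; then
    ip=$(echo "$SSH_CONNECTION" | cut -d " " -f 1)
    echo "$USER logged in from $ip"
fi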
I have one script with common functions that is included in other my scripts with:
. ~/bin/fns
Since my ~/bin path is on the PATH, is there a way to prevent users to execute fns from command line (by returning from the script with a message), but to allow other scripts to include this file?
(Bash >= 4)
Just remove the executable bit with chmod -x ~/bin/fns. It will still work when sourced, but you can't call it (accidentally) by its name anymore.
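For example (a quick demonstration, assuming ~/bin/fns as in the question):
chmod -x ~/bin/fns
. ~/bin/fns    # still works: sourcing only needs read permission
fns            # now fails (permission denied) instead of running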
Some scripts at my workplace use a special shebang
#!/bin/echo Run:.
which returns
Run:. <pathname>
when you use it as a command.
Add the following at the beginning of the script you want to be only allowed to be sourced:
if [ "${0##*/}" = "${BASH_SOURCE[0]##*/}" ]; then
echo "WARNING"
echo "This script is not meant to be executed directly!"
echo "Use this script only by sourcing it."
echo
exit 1
fi
This checks whether the basename of the file being executed ($0) matches the basename of the sourced file (BASH_SOURCE). If they match, you are obviously executing the script directly, so we print a message and exit with status 1.
Another approach: return is only valid inside a function or a sourced file, so attempting it in a subshell tells you whether the script is being sourced, without actually returning.
if (return 0 2>/dev/null); then
:
else
echo "Error: script was executed."
exit 1
fi
I use SSH Secure Shell client to connect to a server and run my scripts.
I want to stop a script on certain conditions, but when I use exit, not only does the script stop, the whole client disconnects from the server! Here is the code:
if [[ `echo $#` -eq 0 ]]; then
echo "Missing argument- must to get a friend list";
exit
fi
for user in $*; do
if [[ !(-f `echo ${user}.user`) ]]; then
echo "The user name ${user} doesn't exist.";
exit
fi
done
Why is this happening?
You used source to run the script, which runs it in the current shell. That means exit terminates the current shell, and with it the ssh session.
Replace source with bash and it should work, or, better, put
#!/bin/bash
at the top of the file and make it executable.
exit exits the current shell - If you've started a script by running it directly, this will exit the shell that the script is running in.
return returns from a function or sourced file (TY Dennis Williamson) - Same thing, but it doesn't terminate your current shell.
break returns from a loop - Similar to return, but can be used anywhere within a loop to stop processing more items. This is probably what you want.
If you are running it in the current shell, exit will obviously exit that shell and disconnect you. Try running it in a new shell (use ./ before the script name) or use 'return' instead of exit.
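For instance, if you keep sourcing the friend-list script, replacing exit with return avoids killing the login shell (a sketch based on the code in the question):
# when the script is run with ". script" (sourced), use return instead of exit
if [[ $# -eq 0 ]]; then
    echo "Missing argument - must get a friend list"
    return 1   # leaves the ssh session alive; exit would disconnect it
fi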
I'm not having much luck Googling this question, and I thought about posting it on SF, but it actually seems like a development question. If not, please feel free to migrate.
So, I have a script that runs via cron every morning at about 3 am. I also run the same scripts manually sometimes. The problem is that every time I run my script manually and it fails, it sends me an e-mail; even though I can look at the output and view the error in the console.
Is there a way for the bash script to tell that it's being run through cron (perhaps by using whoami) and only send the e-mail if so? I'd love to stop receiving emails when I'm doing my testing...
You can try "tty" to see if it's run by a terminal or not. That won't tell you that it's specifically run by cron, but it does tell you it's "not a user at a prompt".
You can also get your parent PID and follow it up the process tree to look for cron, though that's a little heavy-handed.
I had a similar issue. I solved it by checking whether stdout is a TTY. This checks whether your script runs in interactive mode:
if [ -t 1 ] ; then
echo "interacive mode";
else
#send mail
fi
I got this from: How to detect if my shell script is running through a pipe?
The -t test returns true if the file descriptor is open and refers to a terminal; '1' is stdout.
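Filled in with an actual mail command, the pattern looks like this (a sketch; it assumes a mail/mailx command is set up, and the address is a placeholder):
if [ -t 1 ]; then
    echo "interactive mode: the error is already visible on the console"
else
    echo "nightly job failed" | mail -s "cron job failed" you@example.com
fi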
Here are two different options for you:
Take the emailing out of your script/program and let cron handle it. If you set the MAILTO variable in your crontab, cron will email anything the job prints to that address, e.g.:
MAILTO=youremail@example.com
# run five minutes after midnight, every day
5 0 * * * $HOME/bin/daily.job
Set an environment variable in your crontab that is used to determine whether the script is running under cron, e.g.:
THIS_IS_CRON=1
# run five minutes after midnight, every day
5 0 * * * $HOME/bin/daily.job
and in your script something like
if [ -n "$THIS_IS_CRON" ]; then echo "I'm running in cron"; else echo "I'm not running in cron"; fi
Why not have a command-line argument, such as -t for testing or -c for cron?
Or better yet:
-e=email@address.com
If it's not specified, don't send an email.
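A sketch of that idea with getopts (the -e option, the address, and do_the_work are placeholders, not code from the question):
#!/bin/bash
email=""
while getopts "e:" opt; do
    case $opt in
        e) email=$OPTARG ;;   # -e you@example.com: where to send failure reports
    esac
done
if ! do_the_work; then        # do_the_work stands in for the real job
    if [ -n "$email" ]; then
        echo "job failed" | mail -s "cron job failed" "$email"
    else
        echo "job failed" >&2   # manual run: just print the error
    fi
fi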
I know the question is old, but I just came across the same problem. This was my solution:
CRON=$(pstree -s $$ | grep -q cron && echo true || echo false)
then test with
if $CRON
then
echo "Being run by cron"
else
echo "Not being run by cron"
fi
Same idea as the one that @eruciform mentioned - it follows your PID up the process tree, checking for cron.
Note: this solution specifically detects cron, unlike some of the other solutions, which trigger any time the script is run non-interactively.
What works for me is to check $TERM. Under cron it's "dumb", but under a shell it's something else. Run the set command in your terminal and then in a cron script, and compare:
if [ "dumb" == "$TERM" ]
then
echo "cron"
else
echo "term"
fi
I'd like to suggest a new answer to this highly-voted question. This works only on systemd systems with loginctl (e.g. Ubuntu 14.10+, RHEL/CentOS 7+) but is able to give a much more authoritative answer than previously presented solutions.
service=$(loginctl --property=Service show-session $(</proc/self/sessionid))
if [[ ${service#*=} == 'crond' ]]; then
echo "running in cron"
fi
To summarize: when used with systemd, crond (like sshd and others) creates a new session when it starts a job for a user. This session has an ID that is unique for the entire uptime of the machine. Each session has some properties, one of which is the name of the service that started it. loginctl can tell us the value of this property, which will be "crond" if and only if the session was actually started by crond.
Advantages over using environment variables:
No need to modify cron entries to add special invocations or environment variables
No possibility of an intermediate process modifying environment variables to create a false positive or false negative
Advantages over testing for tty:
No false positives in pipelines, startup scripts, etc.
Advantages over checking the process tree:
No false positives from processes that also have crond in their name
No false negatives if the script is disowned
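Wrapped up as a small helper (a sketch built on the loginctl command above; it only works where systemd sessions and loginctl are available):
running_under_cron() {
    local service
    service=$(loginctl --property=Service show-session "$(</proc/self/sessionid)" 2>/dev/null)
    [ "${service#*=}" = "crond" ]   # strip the leading "Service=" and compare
}
if running_under_cron; then
    echo "running in cron"
fi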
Many of the commands used in prior posts are not available on every system (pstree, loginctl, tty). This was the only thing that worked for me on a ten-year-old BusyBox/OpenWrt router that I'm currently using as a blacklist DNS server. It runs a script with an auto-update feature; when run from crontab, it sends an email out.
[ -z "$TERM" ] || [ "$TERM" = "dumb" ] && echo 'Crontab' || echo 'Interactive'
In an interactive shell, the $TERM variable is set to vt102 for me. I included the check for "dumb" since @edoceo mentioned it worked for him. I didn't use '==' since it's not completely portable.
I liked the idea from Tal, but I also see the risk of undefined return values. I ended up with a slightly modified version, which seems to work very smoothly:
CRON="$( pstree -s $$ | grep -c cron )"
So you can check for $CRON being 1 or 0 at any time.
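For example (a small usage sketch):
if [ "$CRON" -gt 0 ]; then   # non-zero means cron appears in the ancestry
    echo "Being run by cron"
else
    echo "Not being run by cron"
fi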