Can Cron Jobs Use Gnome-Open? - bash

I am running Ubuntu 11.10 (Unity interface) and I created a Bash script that uses 'gnome-open' to open a series of web pages I use every morning. When I manually execute the script in the Terminal, the bash script works just fine. Here's a sample of the script (it's all the same so I've shortened it):
#!/bin/bash
gnome-open 'https://docs.google.com';
gnome-open 'https://mail.google.com';
Since it seemed to be working well, I added a job to my crontab (mine, not root's) to execute every weekday at a specific time.
Here's the crontab entry:
30 10 * * 1,2,3,4,5 ~/bin/webcheck.sh
The problem is this error gets returned for every single 'gnome-open' command in the bash script:
GConf-WARNING **: Client failed to connect to the D-BUS daemon:
Unable to autolaunch a dbus-daemon without a $DISPLAY for X11
GConf Error: No D-BUS daemon running
Error: no display specified
I did some searching to try and figure this out. The first thing I tried was relaunching the daemon using SIGHUP:
killall -s SIGHUP gconfd-2
That didn't work so I tried launching the dbus-daemon using this code from the manpage for dbus-launch:
## test for an existing bus daemon, just to be safe
if test -z "$DBUS_SESSION_BUS_ADDRESS" ; then
    ## if not found, launch a new one
    eval `dbus-launch --sh-syntax --exit-with-session`
    echo "D-Bus per-session daemon address is: $DBUS_SESSION_BUS_ADDRESS"
fi
But that didn't do anything.
I tried adding simply 'dbus-launch' at the top of my bash script and that didn't work either.
I also tried editing the crontab to include the path to Bash, because I saw that suggestion on another thread but that didn't work.
Any ideas on how I can get this up and running?

Here is how the problem was solved. It turns out the issue was that the script, when run from cron, had no access to an X window session (or at least that's how I understood it). I fixed it by editing my crontab like so:
30 10 * * 1,2,3,4,5 export DISPLAY=:0 && ~/bin/webcheck.sh
The "export DISPLAY=:0" statement told cron which display to use. I found the answer on this archived Ubuntu forum after searching for "no display specified" or something like that:
http://ubuntuforums.org/archive/index.php/t-105250.html
So now, whenever I'm logged in, exactly at 10:30 my system will automatically launch a series of webpages that I need to look at every day. Saves me having to go through the arduous process of typing in my three-letter alias every time :)
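For reference, the same fix can also live inside the script so the crontab entry stays plain. A minimal sketch, assuming the desktop session is on display :0 (the value can differ, e.g. :1):
#!/bin/bash
# webcheck.sh - open the morning pages; cron jobs have no DISPLAY unless we set one
export DISPLAY=:0
gnome-open 'https://docs.google.com'
gnome-open 'https://mail.google.com'
With that in the script, the crontab line can go back to plain 30 10 * * 1,2,3,4,5 ~/bin/webcheck.sh.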

Glad you asked!
It depends on when it is run.
If the Gnome GDM Greeter is live, you can use the DBUS session from the logon dialog, if you will. You can, e.g., use this to send notifications to the logon screen, if no-one is logged in:
function do_notification
{
    for pid in $(pgrep gnome-session); do
        unset COOKIE
        # pull the session's D-Bus address out of the gnome-session environment
        COOKIE="$(grep -z DBUS_SESSION_BUS_ADDRESS /proc/$pid/environ | cut -d= -f2-)"
        GNUSER="$(ps --no-heading -o uname $pid)"
        echo "Notifying user $GNUSER (gnome-session $pid) with '$@'"
        sudo -u "$GNUSER" DBUS_SESSION_BUS_ADDRESS="$COOKIE" /usr/bin/notify-send -c "From CRON:" "$@"
    done
    unset COOKIE
}
As you can see the above code simply runs the same command (notify-send) on all available gnome-sessions, when called like:
do_notification "I wanted to let you guys know"
You can probably pick this apart and put it to use for your own purposes.
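For the gnome-open question above, a possible adaptation of the same trick (an untested sketch; the function name is made up, and it additionally harvests DISPLAY from each session, since gnome-open needs an X display as well as the session bus):
function open_for_all_sessions
{
    for pid in $(pgrep gnome-session); do
        # pull the D-Bus address, display and owner out of the gnome-session environment
        BUS="$(grep -z DBUS_SESSION_BUS_ADDRESS /proc/$pid/environ | cut -d= -f2-)"
        DISP="$(grep -z '^DISPLAY=' /proc/$pid/environ | cut -d= -f2-)"
        GNUSER="$(ps --no-heading -o uname $pid)"
        sudo -u "$GNUSER" DISPLAY="$DISP" DBUS_SESSION_BUS_ADDRESS="$BUS" gnome-open "$1"
    done
}
open_for_all_sessions 'https://docs.google.com'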

Related

Cron Job Running Shell Script to Run Python Not Working

As written in the title, I am having some problem with my cron job script not executing. I am using CentOS 7.
My crontab -e looks like this:
30 0 * * * /opt/abc/efg/cron_jobs.sh >> /opt/abc/logs/cron_jobs.log
My cron_jobs.sh looks like this:
#!/bin/bash
#keep this script in efg folder
#run this daily through crontab -e
#45 0 * * * /opt/abc/efg/cron_job.sh
cd "$(dirname "$0")"
export PYTHONPATH=$PYTHONPATH:`pwd`
#some daily jobs script for abc
date
#send email to users whose keys will expire 7 days later
/usr/local/bin/python2.7 scripts/send_expiration_reminder.py -d 7
#send email to key owners whose keys will expire
/usr/local/bin/python2.7 scripts/send_expiration_reminder.py -d -1
# review user follow status daily task
# Need to use venv due to some library dependencies
/opt/abc/virtualenv/bin/python2.7 scripts/review_user_status.py
What I've found is that the log in /var/log/cron shows the job ran at 0:30 am as scheduled.
Strangely, /opt/abc/logs/cron_jobs.log is empty, and the scripts do not seem to run at all. It used to produce some output before I re-entered the crontab (to re-install the cron jobs) and replaced cron_jobs.sh, so I think the problem might have arisen from those actions.
I would also like to know if there is any way to log the errors from executing a Python script. I have been trying to run /opt/abc/virtualenv/bin/python2.7 scripts/review_user_status.py but it never seems to work as intended (it does not run the main function at all), and there is no log output whatsoever.
I tried to run this on a different machine and it works properly, so I am not sure what is wrong with the cron job.
Here is a snippet of the log I got from /var/log/cron to show that the cron called the job:
Mar 22 18:32:01 web41 CROND[20252]: (root) CMD (/opt/abc/efg/cron_jobs.sh >> /opt/abc/logs/cron_jobs.log)
There are a few areas to check, if you haven't already.
Check that the execute permission is set on the script:
chmod +x <python file>
and that the user has permission to access the directories involved.
Run the script manually, from beginning to end, as the user who will be running it from cron; that is the more realistic test.
You can test your crontab schedule by temporarily setting it to run every minute, unlike Windows where you can just right-click and click Run.
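On the logging question: the crontab entry only redirects stdout, so Python tracebacks written to stderr never reach /opt/abc/logs/cron_jobs.log (cron either mails them to the owner or drops them). A small tweak captures both streams:
30 0 * * * /opt/abc/efg/cron_jobs.sh >> /opt/abc/logs/cron_jobs.log 2>&1
With that in place, any exception from the Python scripts shows up in the same log file.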
First, thank you all for the suggestions and heads-up. I found out that what was ruining my script was the presence of \r (carriage return) in the line breaks. Linux expects lines to end with \n only.
It happened because I transferred the files to that machine via FTP; the other machine works fine because I used git pull there instead of FTP.
Hope this info is helpful to others!
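For anyone hitting the same thing, a quick way to confirm and strip the carriage returns (a sketch; dos2unix may need installing first, so a sed equivalent is shown as well):
# check for CRLF line endings; file(1) reports "with CRLF line terminators" if affected
file /opt/abc/efg/cron_jobs.sh
# strip the \r characters in place, with either tool
dos2unix /opt/abc/efg/cron_jobs.sh
sed -i 's/\r$//' /opt/abc/efg/cron_jobs.sh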

Executing notify-send from fish script as cronjob

I am trying to call notify-send from a fish script as a cron job. Even though the script is being called by cron, the notification does not pop up on my display. I am not sure where it is failing: whether notify-send is being executed at all, whether it is a shell problem, or something else. Executing the script in the terminal produces the expected result (the popup window).
in crontab -e -u $USER:
SHELL=/bin/fish
* * * * * memcheck >> /tmp/cron.memcheck.log
Running tail --follow /tmp/cron.memcheck.log shows that the script is being called, since it is echoing the debug output into the log file, but it fails to launch notify-send.
This is my (noobish) script:
# Defined in /home/mio/.config/fish/functions/memcheck.fish # line 2
function memcheck
    set MEM_USED (free | string replace '3914132' '' | string match 'Mem: [ ]{1,}[0-9]{1,}' --regex | string match '\d$
    #echo $MEM_USED
    set MEM_CAP 3914132
    set MEM_FREE (math $MEM_CAP - $MEM_USED)
    echo $MEM_FREE
    if test $MEM_FREE -lt 8700700
        echo "WARNING: memory usage out of control. 21:10"
        set DISPLAY :0.0
        echo $DISPLAY
        echo $USER
        /usr/bin/notify-send "Memory Usage" $MEM_FREE --urgency=critical
    end
end
I've read that in some instances notify-send cannot find the display and that setting $DISPLAY to :0.0 might do the trick. If I echo $DISPLAY in my terminal I get :0.0. Also, echoing $USER gives me my user name, which I expected since I ran crontab -u mio -e and didn't edit /etc/crontab directly. Thanks for your time.
if I echo $DISPLAY in my terminal I get :0.0
Yes, but your cronjob doesn't run in your terminal.
In Unix, environment variables are passed from parent processes to their children when they're started.
The fish inside your terminal is a child of that terminal, which has $DISPLAY set to contact X.
But your cronjobs are run by your cron daemon, which is typically a child of your init process, which in turn doesn't have any parent. So it inherits the environment of init.
Set $DISPLAY in your script. This isn't pretty (and I can't say I like the approach of having a cronjob that sends notifications to begin with), but it should work, at least if you have the typical setup with one X server.
Note that fish is entirely irrelevant in this case - it would happen no matter what you picked as shell.
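Concretely, that means the variable has to be exported: in fish that is set -x DISPLAY :0.0 (a plain set normally creates a variable that child processes such as notify-send never see). Alternatively, since cron passes plain NAME=value lines through to every job's environment, a sketch of doing it in the crontab itself:
SHELL=/bin/fish
DISPLAY=:0.0
* * * * * memcheck >> /tmp/cron.memcheck.log
On some desktops notify-send also needs DBUS_SESSION_BUS_ADDRESS, in which case the /proc/<pid>/environ trick from the first answer on this page can be reused.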
Some plausible alternatives (though I've not looked into them far):
Run a watch job in a terminal or via your DE's autostart mechanism. This just reruns things every X seconds, but it has $DISPLAY set.
Use systemd's timer stuff, in particular as a user. There's a command to "upload" an environment variable to systemd, so it can then use it in timers.
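A rough sketch of the systemd user-timer route (the unit names here are made up, the fish path is assumed, and systemctl --user import-environment DISPLAY is the "upload" command referred to above):
# one-time, from a graphical session: hand DISPLAY to the systemd user manager
systemctl --user import-environment DISPLAY
# ~/.config/systemd/user/memcheck.service
[Service]
Type=oneshot
ExecStart=/usr/bin/fish -c memcheck
# ~/.config/systemd/user/memcheck.timer
[Timer]
OnCalendar=*:0/5
[Install]
WantedBy=timers.target
# then activate it
systemctl --user daemon-reload
systemctl --user enable --now memcheck.timer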

linux script to send me an email every time a log file changes

I am looking for a simple way to constantly monitor a log file and send me an email notification every time this log file changes (i.e. new lines have been added to it).
The system runs on a Raspberry Pi 2 (OS Raspbian /Debian Stretch) and the log monitors a GPIO python script running as daemon.
I need something very simple and lightweight. I don't even care about the text of the new log entry, because I know what it says; it is always the same 24 lines appended at the end.
Also, the log.txt file gets recreated every day at midnight, so that might represent another issue.
I already have a working python script to send me a simple email via gmail (called it sendmail.py)
What I tried so far was creating and running the following bash script:
monitorlog.sh
#!/bin/bash
tail -F log.txt | python ./sendmail.py
The problem is that it just sends an email every time I execute it, but when the log actually changes, it just quits.
I am really new to linux so apologies if I missed something.
Cheers
You asked for simple:
#!/bin/bash
# poll the line count every 5 seconds and mail whenever it changes
cur_line_count="$(wc -l myfile.txt)"
while true
do
    new_line_count="$(wc -l myfile.txt)"
    if [ "$cur_line_count" != "$new_line_count" ]
    then
        python ./sendmail.py
    fi
    cur_line_count="$new_line_count"
    sleep 5
done
I've done this a bunch of different ways. Another option is a cron job that runs every minute, counts the number of lines (wc -l), compares that to a stored count (e.g. in /tmp/myfilecounter), and sends the email when the numbers are different.
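A cron-driven sketch of that approach (the paths are placeholders; run it from cron every minute and it mails only when the count stored in /tmp/myfilecounter changes):
#!/bin/bash
# compare the current line count of the log with the last recorded one
logfile=/path/to/log.txt
counter=/tmp/myfilecounter
new=$(wc -l < "$logfile")
old=$(cat "$counter" 2>/dev/null || echo 0)
if [ "$new" != "$old" ]; then
    python ./sendmail.py
fi
echo "$new" > "$counter"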
If you have inotify, there are more direct ways to get "woken up" when the file changes, e.g https://serverfault.com/a/780522/97447 or https://serverfault.com/search?q=inotifywait.
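For instance, with the inotify-tools package installed, something along these lines blocks until the file changes instead of polling (a sketch; the path is a placeholder):
#!/bin/bash
# fire sendmail.py each time log.txt is modified
while inotifywait -e modify /path/to/log.txt; do
    python ./sendmail.py
done
Since the asker's log.txt is recreated at midnight (a new inode), watching the containing directory instead of the file itself may be more robust.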
If you don't mind adding a package to the system, incron is a very convenient way to run a script whenever a file or directory is modified, and it looks like it's supported on raspbian (internally it uses inotify). https://www.linux.com/learn/how-use-incron-monitor-important-files-and-folders. Looks like it's as simple as:
sudo apt-get install incron
sudo vi /etc/incron.allow # Add your userid to this file (or just rm /etc/incron.allow to let everyone use incron)
incrontab -e # Add the following line to the incron table
/path/to/log.txt IN_MODIFY python ./sendmail.py
And you'd be done!

Why ftam service will start and return prompt from terminal but not from bash script?

I am starting the FTAM server (ft820.rc on CentOS 5) with bash 3.0, and I am having an issue starting it from a script. In the script I do
ssh -nq root@$ip /etc/init.d/ft820.rc start
and the script won't continue after this line, although when I run the following directly on the machine at $ip
/etc/init.d/ft820.rc start
I will get the prompt back just after the service is started.
This is the code for start in ft820.rc
SPOOLPATH=/usr/spool/vertel
BINPATH=/usr/bin/osi/ft820
CONFIGFILE=${SPOOLPATH}/ffs.cfg
# Set DBUSERID to any value at all. Just need to make sure it is non-null for
# lockclr to work properly.
DBUSERID=
export DBUSERID
# if startup requested then ...
if [ "$1" = "start" ]
then
    mask=`umask`
    umask 0000
    # startup the lock manager
    ${BINPATH}/lockmgr -u 16
    # update attribute database
    ${BINPATH}/fua ${CONFIGFILE} > /dev/null
    # clear concurrency locks
    ${BINPATH}/finit -cy ${CONFIGFILE} >/dev/null
    # startup filestore
    ${BINPATH}/ffs ${CONFIGFILE}
    if [ $? = 0 ]
    then
        echo Vertel FT-820 Filestore running.
    else
        echo Error detected while starting Vertel FT-820 Filestore.
    fi
    umask $mask
fi
I repost here (at the request of @Patryk) what I put in the comments on the question:
"is it the same when doing the ssh... in the command line? i.e., can you indeed connect without entering a password, using the pair of private_local_key and the corresponding public key that you previously inserted in the destination root@$ip:~/.ssh/authorized_keys file? – Olivier Dulac 20 hours ago"
"you say that, at the command line (and NOT in the script) you can ssh root@.... and it works without asking for your pwd? (i.e., it can then be run from a script?) – Olivier Dulac 20 hours ago"
"try the ssh without the '-n' and even without -nq at all: ssh root@$ip /etc/init.d/ft820.rc start (you could even add ssh -v, which will show you local (1:) and remote (2:) events in a very verbose way, helping in knowing where it gets stuck exactly) – Olivier Dulac 19 hours ago"
"also: before the "ssh..." line in the script, add another line with, for example: ssh root@$ip "set ; pwd ; id ; whoami" and see if that works and shows the correct information. This may help make sure the ssh part is working. The "set" part will also show you the running shell (e.g. if it contains BASH= you're running bash; otherwise SHELL=... should give a good hint (sometimes not correct) about which shell gets invoked) – Olivier Dulac 19 hours ago"
"please try without the '-n' (= run in background and wait, instead of just run and then quit). If it doesn't work, try adding -t -t -t (3 times) to the ssh, to force it to allocate a tty. But first, please drop the '-n'. – Olivier Dulac 18 hours ago"
Apparently what worked was to add the -t option to the ssh command. (you can go up to put '-t -t -t' to further force it to try to allocate the tty, depending on the situation)
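Applied to the script above, the line becomes something like (dropping the -n, as suggested in the comments):
ssh -t -t root@$ip /etc/init.d/ft820.rc start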
I guess it's because the invoked command expected to be run within an interactive session, and so needed a "tty" to be the stdout
A possibility (just a wild guess): the invoked rc script outputs information, but in a buffered environment (i.e., when not launched via your terminal) the calling script never sees enough lines to fill the buffer and start printing anything. It is like running "grep something | something else" in a buffered environment and hitting Ctrl+C before the buffer is big enough to display anything: you end up thinking grep found no lines, whereas there may already have been a few lines in the buffer. There is a lot to be said about buffering, and I am just beginning to read about it all. Forcing ssh to allocate a tty made the called command think it was writing to a live terminal session, which may have turned off the buffering and allowed the result to show. Maybe in the first case it worked too, but you could never see the output?

Can a bash script tell if it's being run via cron?

I'm not having much luck Googling this question. I thought about posting it on SF, but it actually seems like a development question. If not, please feel free to migrate.
So, I have a script that runs via cron every morning at about 3 am. I also run the same scripts manually sometimes. The problem is that every time I run my script manually and it fails, it sends me an e-mail; even though I can look at the output and view the error in the console.
Is there a way for the bash script to tell that it's being run through cron (perhaps by using whoami) and only send the e-mail if so? I'd love to stop receiving emails when I'm doing my testing...
you can try "tty" to see if it's run by a terminal or not. that won't tell you that it's specifically run by cron, but you can tell if its "not a user as a prompt".
you can also get your parent-pid and follow it up the tree to look for cron, though that's a little heavy-handed.
I had a similar issue. I solved it by checking whether stdout is a TTY. This is a check to see if your script runs in interactive mode:
if [ -t 1 ] ; then
    echo "interactive mode";
else
    #send mail
fi
I got this from: How to detect if my shell script is running through a pipe?
The -t test returns true if the file descriptor is open and refers to a terminal. '1' is stdout.
Here's two different options for you:
Take the emailing out of your script/program and let cron handle it. If you set the MAILTO variable in your crontab, cron will send anything printed out to that email address. eg:
MAILTO=youremail@example.com
# run five minutes after midnight, every day
5 0 * * * $HOME/bin/daily.job
Set an environment variable in your crontab that is used to determine if running under cron. eg:
THIS_IS_CRON=1
# run five minutes after midnight, every day
5 0 * * * $HOME/bin/daily.job
and in your script something like
if [ -n "$THIS_IS_CRON" ]; then echo "I'm running in cron"; else echo "I'm not running in cron"; fi
Why not have a command-line argument, e.g. -t for testing or -c for cron?
Or better yet:
-e=email@address.com
If it's not specified, don't send an email.
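A sketch of that suggestion (the option letter, the mail command, and the do_the_real_work placeholder are just illustrative): parse an optional -e address and only send mail when it was given:
#!/bin/bash
# send a failure mail only when an address was passed with -e
email=""
while getopts "e:" opt; do
    case "$opt" in
        e) email="$OPTARG" ;;
    esac
done
shift $((OPTIND - 1))

if ! do_the_real_work "$@"; then    # placeholder for the script's actual job
    [ -n "$email" ] && echo "script failed" | mail -s "cron job failed" "$email"
fi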
I know the question is old, but I just came across the same problem. This was my solution:
CRON=$(pstree -s $$ | grep -q cron && echo true || echo false)
then test with
if $CRON
then
    echo "Being run by cron"
else
    echo "Not being run by cron"
fi
Same idea as the one that @eruciform mentioned: it follows your PID up the process tree, checking for cron.
Note: This solution only works specifically for cron, unlike some of the other solutions, which work anytime the script is being run non-interactively.
What works for me is to check $TERM. Under cron it's "dumb", but under a shell it's something else. Run the set command in your terminal and then in a cron script, and compare the output:
if [ "dumb" == "$TERM" ]
then
echo "cron"
else
echo "term"
fi
I'd like to suggest a new answer to this highly-voted question. This works only on systemd systems with loginctl (e.g. Ubuntu 14.10+, RHEL/CentOS 7+) but is able to give a much more authoritative answer than previously presented solutions.
service=$(loginctl --property=Service show-session $(</proc/self/sessionid))
if [[ ${service#*=} == 'crond' ]]; then
    echo "running in cron"
fi
To summarize: when used with systemd, crond (like sshd and others) creates a new session when it starts a job for a user. This session has an ID that is unique for the entire uptime of the machine. Each session has some properties, one of which is the name of the service that started it. loginctl can tell us the value of this property, which will be "crond" if and only if the session was actually started by crond.
Advantages over using environment variables:
No need to modify cron entries to add special invocations or environment variables
No possibility of an intermediate process modifying environment variables to create a false positive or false negative
Advantages over testing for tty:
No false positives in pipelines, startup scripts, etc
Advantages over checking the process tree:
No false positives from processes that also have crond in their name
No false negatives if the script is disowned
Many of the commands used in prior posts are not available on every system (pstree, loginctl, tty). This was the only thing that worked for me on a ten-year-old BusyBox/OpenWrt router that I'm currently using as a blacklist DNS server. It runs a script with an auto-update feature, and when running from crontab it sends an email out.
[ -z "$TERM" ] || [ "$TERM" = "dumb" ] && echo 'Crontab' || echo 'Interactive'
In an interactive shell the $TERM variable returns the value vt102 for me. I included the check for "dumb" since @edoceo mentioned it worked for him. I didn't use '==' since it's not completely portable.
I also liked the idea from Tal, but I also see the risk of undefined returns. I ended up with a slightly modified version, which seems to work very smoothly in my opinion:
CRON="$( pstree -s $$ | grep -c cron )"
So you can check for $CRON being 1 or 0 at any time.
