cron: how to add a fixed time offset (delay) to a command run over SSH

I did some Google searches but could not find this: is there a way to delay any command run over SSH?
As an example, I want a crontab to start tomorrow, but I want to set it up right now:
crontab mycron.txt

One dinky way to do it is to put a sleep && in front of it, like this:
sleep 600 && echo "10 minutes later!"
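Applied to the question, a rough sketch (assuming a ~24-hour delay is close enough and mycron.txt is in the current directory; nohup keeps the job alive after the SSH session ends):
nohup sh -c 'sleep 86400 && crontab mycron.txt' >/dev/null 2>&1 &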

There is the at command in Linux, which does exactly what you need. Something like this should do the job:
echo "your command" | at 11:00

Related

Trying to add the following with crontab results in a "bad minute" error on line 2

I'm at a loss for what I think may be a simple syntax error. What on line two is causing crontab to throw a "bad minute?" Thanks in advance for the help.
#!/bin/bash
if pgrep -fx "plexdrive mount -v 3 --chunk-check-threads=16 --chunk-load-threads=16 --chunk-load-ahead=16 --max-chunks=256 /home/username/files/Google/" > /dev/null
then
    echo "Plexdrive is running."
else
    echo "Plexdrive is not running, starting Plexdrive"
    fusermount -uz /home/username/files/Google/
    screen -dmS plexdrive plexdrive mount -v 3 --chunk-check-threads=16 --chunk-load-threads=16 --chunk-load-ahead=16 --max-chunks=256 /home/username/files/Google/
fi
exit
The command: pgrep -fx "plexdrive mount -v 3 --chunk-check-threads=16 --chunk-load-threads=16 --chunk-load-ahead=16 --max-chunks=256 /home/username/files/Google/"
runs perfectly fine directly from the command line (returns the process number), so I'm pretty sure I'm just not understanding how to write a logic statement correctly.
Note: The server is remote and I'm merely a user. I have the ability to add to cron but not to services - hence this approach to solving the problem of ensuring that plexdrive (via fuse) always keeps this mount point alive.
You should read up on what a crontab should look like. It is not bash source, in any case: it's a configuration file for starting programs and bash scripts, not for containing bash script.
A crontab line contains the following fields:
minute,
hour,
day of month,
month,
day of week,
each of which specifies when to run the command, and
the command to run.
I.e., if you want your script to run at five minutes after each full hour, and your script is named "my_check_script" (and is in your PATH), the crontab line should look something like this:
5 * * * * my_check_script
Check the linked documentation for more details.
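For the script in this question, a hedged example entry (assuming the script is saved as /home/username/bin/check_plexdrive.sh, is executable, and should run every five minutes):
*/5 * * * * /home/username/bin/check_plexdrive.sh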

Solaris logadm log rotation

I would like to understand how logadm works.
By looking at the online materials, I wrote a small script which redirects the date into a log file and sleeps for 1 second. This runs in an infinite loop.
#!/usr/bin/bash
while true
do
    echo `date` >> /var/tmp/temp.log
    sleep 1
done
After this I executed the commands below:
logadm -w /var/tmp/temp.log -s 100b
logadm -V
My intention with the above commands is that the log (/var/tmp/temp.log) should be rotated every 100 bytes.
But after setting this up, when I run the script in the background, I see the log file is not rotated.
# ls -lrth /var/tmp/temp.log*
-rw-r--r-- 1 root root 7.2K Jun 15 08:56 /var/tmp/temp.log
#
As I understand it, you have to call logadm for it to do the work, e.g. from crontab or manually, like logadm -c /var/tmp/temp.log (usually placed in crontab).
Side note: you could simply write date >> /var/tmp/temp.log without the echo.
This is not how I would normally do this, plus I think you may have misunderstood the -w option.
The -w option updates /etc/logadm.conf with the parameters on the command line, and logadm is then run at 10 minutes past 3am (on the machine I checked).
I took your script and ran it, then ran:
"logadm -s 100b /var/tmp/temp.log"
and it worked fine. Give it a try! :-)
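Tying the two answers together: if you do want the rotation to happen automatically, the entry that drives logadm lives in root's crontab. A minimal sketch, mirroring the default 3:10 am run mentioned above (the exact schedule on your machine may differ):
10 3 * * * /usr/sbin/logadm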

checking if a streaming server is up using bash?

I use Ubuntu and am trying to write a script that makes the following:
- test if an audio stream works
- if not, send an email.
I have tried the following code (running as a cron job every 10 minutes), which 'works' if I supply the wrong password (it sends an email then), but does nothing if the actual server is down (tested by killing the server). Any ideas on how to fix the script?
Thanks in advance!
#!/bin/bash
#servertest.sh
username=user1
password=xyz
url="http://wwww.streamingaudioserver.com -passwd $password -user $username"
mplayer $url &
sleep 5
test=$(pgrep -c mplayer)
if [ $test = 0 ]; then
    #server is down!
    $mailfile="downmail.txt"
    /usr/sbin/ssmtp test@maildomain.com < "/home/test/$mailfile"
fi
killall mplayer
sleep 5
exit
Your problem is in this line:
$mailfile="downmail.txt"
Remove the dollar sign and that should do it.
You should be getting error messages in your cron log or emails to the crontab owner complaining about a command not found or no such file.
Edit:
Does your script work if run from the command line (with the stream down) rather than cron?
Try using set -x (or #!/bin/bash -x) in the script to turn on tracing or use echo "PID: $$, value of \$test: $test" > /tmp/script.out after the assignment to see if you're getting the zero you're expecting.
Also, try an ssmtp command outside the if to make sure it's working (but I think you already said it is under some circumstances).
Try your script without ever starting mplayer.
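Putting those fixes together, a hedged rewrite of servertest.sh (same placeholders as the question; the only real changes are dropping the dollar sign from the assignment and quoting the test):
#!/bin/bash
# servertest.sh - sketch, not a drop-in replacement
username=user1
password=xyz
url="http://wwww.streamingaudioserver.com -passwd $password -user $username"
mplayer $url &
sleep 5
# count running mplayer processes; 0 means the stream never started
test=$(pgrep -c mplayer)
if [ "$test" -eq 0 ]; then
    # server is down - send the alert mail
    mailfile="downmail.txt"
    /usr/sbin/ssmtp test@maildomain.com < "/home/test/$mailfile"
fi
killall mplayer
sleep 5
exit 0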

Can a bash script tell if it's being run via cron?

Not having much luck Googling this question and I thought about posting it on SF, but it actually seems like a development question. If not, please feel free to migrate.
So, I have a script that runs via cron every morning at about 3 am. I also run the same script manually sometimes. The problem is that every time I run my script manually and it fails, it sends me an e-mail, even though I can look at the output and see the error in the console.
Is there a way for the bash script to tell that it's being run through cron (perhaps by using whoami) and only send the e-mail if so? I'd love to stop receiving emails when I'm doing my testing...
You can try tty to see if it's run from a terminal or not. That won't tell you that it's specifically run by cron, but you can tell if it's "not a user at a prompt".
You can also get your parent PID and follow it up the tree to look for cron, though that's a little heavy-handed.
I had a similar issue. I solved it by checking whether stdout is a TTY. This checks whether your script runs in interactive mode:
if [ -t 1 ] ; then
    echo "interactive mode"
else
    # not a terminal; send mail here
    :
fi
I got this from: How to detect if my shell script is running through a pipe?
The -t test returns true if the file descriptor is open and refers to a terminal; '1' is stdout.
Here are two different options for you:
Take the emailing out of your script/program and let cron handle it. If you set the MAILTO variable in your crontab, cron will send anything printed out to that email address, e.g.:
MAILTO=youremail@example.com
# run five minutes after midnight, every day
5 0 * * * $HOME/bin/daily.job
Set an environment variable in your crontab that is used to determine whether you're running under cron, e.g.:
THIS_IS_CRON=1
# run five minutes after midnight, every day
5 0 * * * $HOME/bin/daily.job
and in your script something like
if [ -n "$THIS_IS_CRON" ]; then echo "I'm running in cron"; else echo "I'm not running in cron"; fi
Why not have a command line argument that is -t for testing or -c for cron?
Or better yet:
-e=email@address.com
If it's not specified, don't send an email.
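A minimal sketch of that idea using getopts (the -e option, the do_the_work placeholder, and the use of mail are assumptions for illustration, not part of the original script):
#!/bin/bash
# Parse an optional -e <address>; failures are only mailed when it is supplied.
mailto=""
while getopts "e:" opt; do
    case "$opt" in
        e) mailto="$OPTARG" ;;
    esac
done
shift $((OPTIND - 1))
do_the_work() {
    # placeholder for whatever the real script does; simulate a failure
    false
}
if ! do_the_work; then
    # send email only when an address was given (i.e. the cron invocation)
    [ -n "$mailto" ] && echo "job failed" | mail -s "cron job failed" "$mailto"
fi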
I know the question is old, but I just came across the same problem. This was my solution:
CRON=$(pstree -s $$ | grep -q cron && echo true || echo false)
then test with
if $CRON
then
    echo "Being run by cron"
else
    echo "Not being run by cron"
fi
Same idea as the one that @eruciform mentioned: it follows your PID up the process tree, checking for cron.
Note: This solution only works specifically for cron, unlike some of the other solutions, which work anytime the script is being run non-interactively.
What works for me is to check $TERM. Under cron it's "dumb", but under a shell it's something else. Run the set command in your terminal and then in a cron script, and compare:
if [ "dumb" == "$TERM" ]
then
echo "cron"
else
echo "term"
fi
I'd like to suggest a new answer to this highly-voted question. This works only on systemd systems with loginctl (e.g. Ubuntu 14.10+, RHEL/CentOS 7+) but is able to give a much more authoritative answer than previously presented solutions.
service=$(loginctl --property=Service show-session $(</proc/self/sessionid))
if [[ ${service#*=} == 'crond' ]]; then
    echo "running in cron"
fi
To summarize: when used with systemd, crond (like sshd and others) creates a new session when it starts a job for a user. This session has an ID that is unique for the entire uptime of the machine. Each session has some properties, one of which is the name of the service that started it. loginctl can tell us the value of this property, which will be "crond" if and only if the session was actually started by crond.
Advantages over using environment variables:
No need to modify cron entries to add special invocations or environment variables
No possibility of an intermediate process modifying environment variables to create a false positive or false negative
Advantages over testing for tty:
No false positives in pipelines, startup scripts, etc
Advantages over checking the process tree:
No false positives from processes that also have crond in their name
No false negatives if the script is disowned
Many of the commands used in prior posts are not available on every system (pstree, loginctl, tty). This was the only thing that worked for me on a ten-year-old BusyBox/OpenWrt router that I'm currently using as a blacklist DNS server. It runs a script with an auto-update feature; when run from crontab, it sends an email out.
[ -z "$TERM" ] || [ "$TERM" = "dumb" ] && echo 'Crontab' || echo 'Interactive'
In an interactive shell the $TERM variable returns the value vt102 for me. I included the check for "dumb" since @edoceo mentioned it worked for him. I didn't use '==' since it's not completely portable.
I also liked the idea from Tal, but I also see the risk of having undefined returns. I ended up with a slightly modified version, which seems to work very smoothly in my opinion:
CRON="$( pstree -s $$ | grep -c cron )"
So you can check for $CRON being 1 or 0 at any time.
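For example, a one-line check mirroring the earlier snippet:
if [ "$CRON" -eq 1 ]; then echo "Being run by cron"; else echo "Not being run by cron"; fi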

I want to make a conditional cronjob

I have a cron job that runs every hour. It accesses an XML feed. If the XML feed is unavailable (which seems to happen once a day or so) it creates a "failure" file. This "failure" file has some metadata in it and is erased at the next hour, when the script runs again and the XML feed works again.
What I want is to make a 2nd cron job that runs a few minutes after the first one, looks into the directory for a "failure" file and, if it's there, retries the 1st cron job.
I know how to set up cron jobs, I just don't know how to make scripting conditional like that. Am I going about this in entirely the wrong way?
Possibly. Maybe what you'd be better off doing is having the original script sleep and retry a (limited) number of times.
Sleep is a shell command and shells support looping so it could look something like:
for ((retry=0; retry<12; retry++)); do
    # try the thing (fetch the XML feed here)
    if [[ -e my_xml_file ]]; then break; fi
    sleep 300
    # five minutes later...
done
As the command to run, try:
/bin/bash -c 'test -e failurefile && retrycommand -someflag -etc'
It runs retrycommand if failurefile exists.
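Scheduled a few minutes after the hourly job, the cron entry could look something like this (failurefile and retrycommand are this answer's placeholders):
5 * * * * /bin/bash -c 'test -e failurefile && retrycommand -someflag -etc'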
Why not have your script touch a status file when it has successfully completed? Have it run every 5 minutes, and have the first check of the script be to see whether the status file is less than 60 minutes old: if it is young, then quit; if it is old, then fetch.
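A hedged sketch of that status-file approach (the status file path and the fetch_feed placeholder are assumptions, not anything from the original job):
#!/bin/bash
status=/var/tmp/feed.ok
# quit while the status file is younger than 60 minutes
if [ -n "$(find "$status" -mmin -60 2>/dev/null)" ]; then
    exit 0
fi
# otherwise fetch the feed, and only record success if it worked
fetch_feed && touch "$status"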
I agree with MarkusQ that you should retry in the original job instead of creating another job to watch the first job.
Take a look at this tool to make retrying easier: https://github.com/kadwanev/retry
You can just wrap the original cron command in a retry very easily, and the final existence of the failure file would indicate whether it failed even after the retries.
If somebody needs a bash script that pings an endpoint (for example, to run scheduled API tasks via cron) and retries if the response status is bad:
#!/bin/bash
echo "Start pinch.sh script.";
# run 5 times
for ((i=1;i<=5;i++))
do
    # run curl to send a request to https://www.google.com
    # silently discard all its output
    # get the response status code as a bash variable
    http_response=$(curl -o /dev/null -s -w "%{response_code}" https://www.google.com);
    # check for the expected code
    if [ "$http_response" != "200" ]
    then
        # process fail
        echo "The pinch is Failed. Sleeping for 5 minutes."
        # wait for 300 seconds, then start another iteration
        sleep 300
    else
        # exit from the cycle
        echo "The pinch is OK. Finishing."
        break;
    fi
done
exit 0
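To run it from cron as described, an entry along these lines should work (the hourly schedule and the script path are assumptions):
0 * * * * /path/to/pinch.sh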
