script from cron doesn't run - bash

I have a script:
-rwx------. 1 root root 135 Oct 15 12:00 /backup/purge.sh
#!/bin/bash
volume=`echo "list volumes" | bconsole|grep -i "Append\|Full"|awk '{print $4}'`
echo "purge volume=$volume yes" | bconsole
If I run it manually, it runs.
If I put the script in crontab, it doesn't run; however, the log says it ran.
Oct 15 16:07:01 sdfdsfdsf CROND[36326]: (root) CMD (/backup/purge.sh)
The schedule:
07 16 * * * /backup/purge.sh
If I run manually:
/backup/purge.sh
Connecting to Director weewr:9101
1000 OK: 1 werewrewrewr Version: 7.0.5 (28 July 2014)
Enter a period to cancel a command.
purge volume=Vol-0001 yes
This command can be DANGEROUS!!!
It purges (deletes) all Files from a Job,
JobId, Client or Volume; or it purges (deletes)
all Jobs from a Client or Volume without regard
to retention periods. Normally you should use the
PRUNE command, which respects retention periods.
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
1 File on Volume "Vol-0001" purged from catalog.
There are no more Jobs associated with Volume "Vol-0001". Marking it purged.

bconsole wasn't in cron's PATH, so I used the full path for the bconsole command, like this:
#!/bin/bash
volume=$(echo "list volumes" | /sbin/bconsole | grep -i "Append\|Full" | awk '{print $4}')
echo "purge volume=$volume yes" | /sbin/bconsole

Related

Get file size not working in scheduled job

I have a bash script running on Ubuntu 18.04. I scheduled it using a systemd timer.
#!/bin/bash
backupdb(){
/usr/bin/mysqldump -u backupuser -pbackuppassword --add-locks --extended-insert --hex-blob $1 > /opt/mysqlbackup/$1.sql
/bin/gzip -c /opt/mysqlbackup/$1.sql > /opt/mysqlbackup/$1-$(date +%A).sql.gz
rm -rf /opt/mysqlbackup/$1.sql
echo `date "+%h %d %H:%M:%S"`": " $1 "- Size:" `/usr/bin/stat -c%s "${1}-$(date +%A).sql.gz"` >> /opt/mysqlbackup/backupsql.log
}
# List of databases to backup
backupdb cardb
backupdb bikedb
When I run this script interactively, the backup log gets 2 entries:
Jun 16 20:15:03: cardb - Size: 200345
Jun 16 20:15:12: bikedb - Size: 150123
However, when this is run as a systemd timer service, the log still gets 2 entries, but no file size is given in the log file. Not 0; it's simply blank. The backup file, cardb.sql.gz, is created and is non-zero. I can unzip it and it does contain a valid SQL file.
I can't figure out why this is happening.
You need to specify the absolute path of your file
Without specifying the absolute path, you are assuming that the systemd timer runs your script from the same directory you tested it from. To remedy this, you can either use the absolute path or change directories before accessing your file.
echo `date "+%h %d %H:%M:%S"`": " $1 "- Size:" `/usr/bin/stat -c%s "/opt/mysqlbackup/${1}-$(date +%A).sql.gz"` >> /opt/mysqlbackup/backupsql.log
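The other option mentioned above, changing into the directory before accessing the file, would look roughly like this; it is only a sketch reusing the paths from the question, not a tested replacement:
#!/bin/bash
# Run from the backup directory so the relative filename passed to stat
# resolves the same way under the systemd timer as it does interactively.
cd /opt/mysqlbackup || exit 1

backupdb(){
    /usr/bin/mysqldump -u backupuser -pbackuppassword --add-locks --extended-insert --hex-blob $1 > /opt/mysqlbackup/$1.sql
    /bin/gzip -c /opt/mysqlbackup/$1.sql > /opt/mysqlbackup/$1-$(date +%A).sql.gz
    rm -f /opt/mysqlbackup/$1.sql
    echo `date "+%h %d %H:%M:%S"`": " $1 "- Size:" `/usr/bin/stat -c%s "${1}-$(date +%A).sql.gz"` >> /opt/mysqlbackup/backupsql.log
}

backupdb cardb
backupdb bikedb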

Bash script for monitoring logs based upon last update time

I have a directory on a RHEL 6 server where logs are being written as below. As you can see, there are 4 logs already written within 1 minute. I want to write a script that checks every 15 minutes (via cron) and, if the log files are not updating, sends an email alert like "Adapter is in hang status, Restart Required". I know basic Linux commands and have some knowledge of cron. This is how I am trying:
-rw-r--r-- 1 root root 11M Oct 6 00:32 Adapter.log.3
-rw-r--r-- 1 root root 11M Oct 6 00:32 Adapter.log.2
-rw-r--r-- 1 root root 10M Oct 6 00:32 Adapter.log.1
-rw-r--r-- 1 root root 6.3M Oct 6 00:32 Adapter.log
$ ll Adapter.log >/tmp/test.txt
$ cat /tmp/test.txt | awk '{print $6,$7,$8}'
Oct 6 03:10
Now how can I get the time of the same log file after 15 minutes, so that I can compare the time difference and write a script to send the alert?
Given the description, it looks like the timestamp can be checked every 15 minutes:
If the file was updated in the last 15 minutes, do nothing.
If the file was updated 15 to 30 minutes ago, send an email alert.
If the file was updated more than 30 minutes ago, do nothing, as the error was already reported on a previous cycle.
Consider placing the following into cron, on a 15-minute interval:
find /path/to/log/Adapter.log* -mmin +15 -mmin -30 | xargs -L1 send-alert
This solution will work in most situations. However, it's worth noting that if the system load is very high, cron execution may be delayed, which would affect the age test. In those cases, an extra file to store the last test time is needed.
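A variant of that idea keeps a small state file so the alert fires only once per hang and rearms itself when the log starts moving again. This is only a sketch: the log path, the state file, the address, and the use of mail(1) are placeholders, not from the original post. Crontab entry:
*/15 * * * * /usr/local/bin/check_adapter.sh
And the script it calls:
#!/bin/bash
# Sketch: alert if Adapter.log has not been modified in the last 15 minutes.
LOG=/path/to/log/Adapter.log
STATE=/var/tmp/adapter_alert_sent

if find "$LOG" -mmin +15 | grep -q .; then
    # Log is stale; alert only once until it recovers.
    if [ ! -e "$STATE" ]; then
        echo "Adapter is in hang status, Restart Required" \
            | mail -s "Adapter alert on $(hostname)" admin@example.com
        touch "$STATE"
    fi
else
    # Log is fresh again; clear the flag so the next hang alerts again.
    rm -f "$STATE"
fi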

Mac os x terminal mail: send multiple outputs in one mail

I have a backup script that runs every 2 weeks with cron on my macOS High Sierra machine.
That part works; now I want to mail the log to myself using these 2 lines:
df -Ph /Volumes/USB_Storage >> "/Users/ralphschipper/Documents/Logs/rsync"`date +"%Y-%m-%d"`.log
cat "/Users/ralphschipper/Documents/Logs/rsync"`date +"%Y-%m-%d"`.log | /usr/bin/mail -s "Backuplog" user@gmail.com
The thing is: my backup starts at 10:00 pm on September 15, so the log file is created on the 15th.
The backup was done at 1:00 am on September 16, so a new log file was created.
In the end, the mail was sent using the log file that contains the df output from the 16th.
Does anyone know how to fix this?
Can I create a variable at the beginning of the process that stores the current date and use that?
Or can I send a mail that contains both the log file and the df results?
Regards,
Ralph
Store the date you want to use (and do the same with the complete filename).
backupdate=$(date +"%Y-%m-%d")
backupfile="/Users/ralphschipper/Documents/Logs/rsync${backupdate}.log"
df -Ph /Volumes/USB_Storage >> "${backupfile}"
cat "${backupfile}" | /usr/bin/mail -s "Backuplog of ${backupdate}" user#gmail.com

SGE submitted job state doesn't change from "qw"

I'm using Sun Grid Engine on Ubuntu 14.04 to queue my jobs to be run on a multicore CPU.
I've installed and set up SGE on my system. I created a "hello_world" dir containing two shell scripts, "hello_world.sh" and "hello_world_qsub.sh": the first contains a simple command, and the second contains the qsub command that submits the first script as a job.
Here's what "hello_world.sh" includes:
#!/bin/bash
echo "Hello world" > /home/theodore/tmp/hello_world/hello_world_output.txt
And here's what "hello_world_qsub.sh" includes:
#!/bin/bash
qsub \
-e /home/hello_world/hello_world_qsub.error \
-o /home/hello_world/hello_world_qsub.log \
./hello_world.sh
After making the second script executable and running it with "./hello_world_qsub.sh" from that directory, the output is reasonable:
Your job 1 ("hello_world.sh") has been submitted
But the output of the "qstat" command is frustrating:
job-ID prior name user state submit/start at queue slots ja-task-ID
-----------------------------------------------------------------------------------------------------------------
1 0.50000 hello_worl mhr qw 05/16/2016 20:26:23 1
And the "state" column always remains on "qw" and never changes to "r".
Here's the output of the "qstat -j 1" command:
==============================================================
job_number: 1
exec_file: job_scripts/1
submission_time: Mon May 16 20:26:23 2016
owner: mhr
uid: 1000
group: mhr
gid: 1000
sge_o_home: /home/mhr
sge_o_log_name: mhr
sge_o_path: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
sge_o_shell: /bin/bash
sge_o_workdir: /home/mhr/hello_world
sge_o_host: localhost
account: sge
stderr_path_list: NONE:NONE:/home/hello_world/hello_world_qsub.error
mail_list: mhr#localhost
notify: FALSE
job_name: hello_world.sh
stdout_path_list: NONE:NONE:/home/hello_world/hello_world_qsub.log
jobshare: 0
env_list:
script_file: ./hello_world.sh
scheduling info: queue instance "mainqueue#localhost" dropped because it is temporarily not available
All queues dropped because of overload or full
And here's the output of the "qhost" command:
HOSTNAME ARCH NCPU LOAD MEMTOT MEMUSE SWAPTO SWAPUS
-------------------------------------------------------------------------------
global - - - - - - -
localhost - - - - - - -
What should I do to make my jobs run and finish their task?
From your qhost output, it looks like your machine "localhost" is properly configured in SGE. However, on "localhost" sge_execd is either not running or not configured properly. If it were, qhost would report statistics for "localhost".
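A quick way to confirm that (my commands, not from the original answer, and assuming the stock Debian/Ubuntu gridengine packages; the service name may differ on other installs):
# Is the execution daemon running on this host?
ps -ef | grep [s]ge_execd

# If not, start it (Debian/Ubuntu package service name; adjust for your install):
sudo service gridengine-exec restart

# Once it is up, qhost should show real numbers for localhost instead of
# dashes, and the queued job should move from "qw" to "r".
qhost
qstat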

I cannot create a good log file

Found the Solution!!!!
After a gob of Googling, I found this in a forum, from a person asking "How to: Add or display today’s date from a shell script".
This is what I did:
I added the following to the beginning of my FTP script:
#!/bin/bash
TODAY=$(date)
HOST=$(hostname)
echo "--------------------------------------------"
echo "This script was run: $TODAY ON HOST:$HOST "
echo "--------------------------------------------"
# below is original code minus the #!/bin/sh
#
cd /folder where csv files are/
ftp -v -i -n 111.222.333.444 <<EOF
user mainuser dbuser
mput phas*.csv
bye
EOF
Now my log, on each cron run of the FTP job, shows:
This script was run: Tue Nov 12 11:16:02 EST 2013 ON MyServer's HostName>
In the crontab, I changed the entry to log with >> so the log is appended rather than overwritten:
16 11 * * * /srv/phonedialer_tmp/ftp-date.sh &>> /srv/phonedialer_tmp/ftp-date.log
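One caveat worth adding (my note, not part of the original post): &>> is a bash extension, and some crons hand the command line to plain /bin/sh; the portable equivalent is:
16 11 * * * /srv/phonedialer_tmp/ftp-date.sh >> /srv/phonedialer_tmp/ftp-date.log 2>&1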
I found a way to create a log file of the daily FTP transfers by searching here:
./ftp_csv.sh 2>&1 > ftp_csv.log
It works great in that it records each time the cron job runs. However, what I cannot find is a way to insert the date/time of each event. As you can see below, it records the transferring of the files.
Is there a way I can somehow add a date/timestamp to the beginning or end of each recorded event within the log file?
[stevek#localhost phonedialer_tmp]$ cat ftp_csv.log
Connected to 1.2.3.4 (1.2.3.4).
220 Microsoft FTP Service
331 Password required for mainuser.
230 User mainuser logged in.
221
Connected to 1.2.3.4 (1.2.3.4).
220 Microsoft FTP Service
331 Password required for mainuser.
230 User mainuser logged in.
221
Connected to 1.2.3.4 (1.2.3.4).
220 Microsoft FTP Service
331 Password required for mainuser.
230 User mainuser logged in.
221 ETC
Thanks so much for any information
