Send Mail using cron job and shell script - shell

I am using a cron job to run my shell script every 2 minutes. My shell script contains Pig and Hive scripts. I am searching for high-risk people with my Hive query, and I can get their email IDs from my Hive table. I want to know whether I can send mail to those people, and how. I checked on the internet but could not understand it properly; it would be a great help if you could help me with this. Thanks.

This code solves my problem
$ echo "hello world" | mail -s "a subject" xxx#xxx.com

Related

Get echoed cron text sent to email

I have 2 DigitalOcean servers - one is a few years old, one is new.
On the old server, for any cron job that echoes something, the echoed content is emailed to me. I'm pretty sure this happened by default - I didn't configure it myself.
On the new server, echoed content is not emailed. I have tried to send an email using the following, which worked fine, so my understanding is that email is running ok.
echo "This is the body of the email" | mail -s "This is the subject line" "me#myemail.com"
Can anyone tell me if there's a specific option for this, or if I'm missing something?
All cron jobs email any output on stdout or stderr to the user owning the crontab (a minimal example is sketched below).
If you don't see that mail, you either:
- are looking in the wrong mailbox
- have something odd in $HOME/.forward
- have a glitch in your crontab (an odd MAILTO, perhaps)
- or have a command that doesn't produce any output
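As a minimal sketch of how these pieces fit together (the address is a placeholder and the schedule is arbitrary):
# Hypothetical crontab: anything the job prints to stdout/stderr is mailed to MAILTO,
# or to the crontab owner if MAILTO is unset.
MAILTO=me@myemail.com
*/5 * * * * echo "cron ran"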

KSH Script Formatting With Mailx

Alright, so I have been asked to help with this script. Basically, it needs to email a user if there is an error associated with an ID. I won't go into too much detail on that, because the SQL portion does that part correctly. What I don't know how to do is pull each user email out of the SQL output individually, email them one at a time, and attach the error ticket associated with it to the email body.
My KSH script as I have it now.
#!/usr/bin/ksh
# sets all environment variables
. /data/pmes/pmes/pmes_scripts/pmes_Env_Var.config
# stores the results of the .sql file in the emailBody variable
emailBody=`sqlplus -s $USERID/$PPASSWD@$DATABASE @/data/pmes/pmes/pmes_scripts/testingmail2.sql`
# mail is sent to these addresses when the script completes
MAIL_ID='emailme@yoursite.com'
echo "$emailBody" | mailx -s "Test Email" $MAIL_ID
Output of echo $emailBody
aaron.heckel@yoursite.com 20140801_BR_Bob,D_PZXKGX steve.naman@yoursite.com 20140816_AM_Andrew,D_PZXKGX
(It is basically an email address followed by the issue ID, with a space between each item.)
Any help would be appreciated. I'm very new to mailing and sqlplus in UNIX.
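One way this could be approached, assuming the output really is alternating address/ticket pairs separated by whitespace (the subject line here is made up for illustration):
# Hypothetical sketch: split $emailBody into address/ticket pairs and mail each
# ticket to its own address, one message at a time.
echo "$emailBody" | xargs -n 2 | while read addr ticket; do
    echo "Error ticket: $ticket" | mailx -s "PMES error ticket" "$addr"
done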

How to get information about completed PBS or Torque jobs?

I have the IDs of completed jobs. How do I check their detailed information, such as execution time, allocated nodes, etc.? I remember SGE has a command for this (qacct?), but I could not find the equivalent for PBS or Torque. Thanks.
Since viewing completed jobs through job accounting requires root access, or requires the cluster admins to have installed pbstools (both out of a user's control), I've found that the easiest thing to do is to place a
tracejob $PBS_JOBID
on the last line of the submission script. If the scheduler is MAUI, then checkjob -vv $PBS_JOBID is another alternative. These commands could be redirected to a separate outfile:
tracejob $PBS_JOBID > $PBS_O_WORKDIR/$PBS_JOBID.tracejob
It should also be possible to run this as a user epilogue script to make it more reusable from job to job.
I was looking at this thread trying to work out how to do this on my HPC system running PBSPro 19.2.3. As of PBSPro 18, the solution is similar to John Damm Sørensen's reply, but the -w flag is used instead of -1 to display each field's output on a single line, and you need to add the -x flag to see the details of finished jobs as well, so you don't need to run it within the job script (p. 203, section 2.59.2.2 of the Reference Guide):
qstat -fxw $PBS_JOBID
You can then grep out of it the requested information, such as resources used, Exit status, etc:
qstat -fxw $PBS_JOBID | grep -E "resources_used|Exit_status|array_index"
For Torque, you can check at least part of the information you seek using the "tracejob" command.
Official documentation:
http://docs.adaptivecomputing.com/torque/Content/topics/11-troubleshooting/usingTracejobToLocateFailures.htm
One thing you should notice is that this tool is a convenience that parses the logs. By default it only checks the last day, so be sure to read the documentation for the "-n" option.
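For example, to look back further than the default one day (here five days; the job ID is a placeholder):
tracejob -n 5 5000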
On a Torque-based system, I find that the best way to get stats for a job is to add this to the end of the submitted job script. The output will be appended to the job's STDOUT file.
qstat -f -1 $PBS_JOBID
Right now the only way to get this in TORQUE is to look at the accounting logs. You can grep for the job id and view the accounting records for the job, which look like this:
04/30/2014 15:20:18;Q;5000.bob;queue=batch
04/30/2014 15:33:00;S;5000.bob;user=dbeer group=dbeer jobname=STDIN queue=batch ctime=1398892818 qtime=1398892818 etime=1398892818 start=1398893580 owner=dbeer@bob exec_host=bob/0
04/30/2014 15:36:20;E;5000.bob;user=dbeer group=dbeer jobname=STDIN queue=batch ctime=1398892818 qtime=1398892818 etime=1398892818 start=1398893580 owner=dbeer@bob exec_host=bob/0 session=22933 end=1398893780 Exit_status=0 resources_used.cput=00:00:00 resources_used.mem=2580kb resources_used.vmem=37072kb resources_used.walltime=00:03:20
Unfortunately, doing this directly requires root access. To get around this, there are tools such as pbsacct that make browsing these records easier; pbsacct is part of the pbstools package.
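As a rough sketch of what that lookup could look like (assuming root access; the accounting log path varies by install, commonly under the server's server_priv/accounting directory, and the job ID is the example one above):
grep '5000.bob' /var/spool/torque/server_priv/accounting/*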

Capture job id of a job submitted by qsub

I have been looking for a simple way to capture the job ID of a job submitted by qsub. I saw a suggestion of giving the job a name and then using that name, but that's an indirect method. I tried the following but am getting an error:
jobID="qsub job.sh"
$jobID
35546.cell0 (this is the output I want to capture)
qsub -W depend=afterok:$jobID analyze.sh
Can anyone please suggest a neat way to capture the job ID from qsub?
Thank you very much.
You may try
qsub -W depend=afterok:$(qsub job.sh) analyze.sh
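If you prefer to keep the ID in a variable for later reuse, an equivalent two-step sketch:
# Capture the job ID printed by qsub, then use it for the dependency.
jobID=$(qsub job.sh)                       # e.g. 35546.cell0
qsub -W depend=afterok:"$jobID" analyze.sh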

postfix pipe mail to script does not work

I've done my research and tried lots of ways, but to no avail; I still could not get Postfix to pipe mail to my script.
Content of /etc/aliases:
test2: "|/home/testscript.sh"
Content of /home/testscript.sh (note: I've tried many kinds of things in the script; even a simple echo does not work):
#!/bin/sh
read msg
echo "$msg"
I've tried running the script directly and it works fine.
How would you tell that it's working?
Even if you successfully direct mail to the script, you're not going to see the output of the "echo" command. If you expect to get an email response from the script, the script will need to call out to /bin/mail (or sendmail or contact an SMTP server or something) to generate the message. If you're just looking to verify that it's working, you need to create some output where you can see it -- for example, by writing the message to the filesystem:
#!/bin/sh
cat > /tmp/msg
You should also look in your mail logs (often but not necessarily /var/log/mail) to see if there are any errors (or indications of success!).
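If the goal really is an email response, a minimal sketch of a script that both records the incoming message and replies (the reply address and wording are placeholders):
#!/bin/sh
# Hypothetical sketch: save the piped message to disk, then send a reply via mail.
cat > /tmp/msg
echo "Your message was received." | mail -s "Message received" someone@example.com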
