I created a small shell script that logs all of its input to a log file. I thought I could replace the sendmail binary with it and thus get a simple way to simulate e-mail delivery without actually setting up a working sendmail.
This failed, however, for reasons I cannot understand.
I've looked at the PHP mail.c source and, as far as I can tell (mind you, I'm not very experienced in C), PHP executes and talks directly to the binary set in sendmail_path. But no log files are created when I replace the sendmail binary with my script, even though the replacement script always creates a log file when executed, regardless of whether there's any input.
The script itself works fine. Its return codes conform to sendmail's, with the difference that my script always returns 0 regardless of the input, since I'm not really interested in checking whether the input is valid - just that I'm getting some.
Is it possible to achieve what I want, i.e. to use a sendmail simulator?
The script source is provided below:
#!/bin/bash
LOGDIR=/tmp/sendmail-sim
NOW=$(date +%Y%m%dT%H%M)
CNT=1
FILENAME="$LOGDIR/$NOW.$CNT.log"
# find a free file name for this invocation
while [ -f "$FILENAME" ]; do
    CNT=$((CNT + 1))
    FILENAME="$LOGDIR/$NOW.$CNT.log"
done
# record the command line, then append everything read from stdin
echo "$0 $*" > "$FILENAME"
while read -r BUF
do
    echo "$BUF" >> "$FILENAME"
done
exit 0
PS. My current sendmail (or actually, postfix) does receive email from PHP, but I don't want to actually send any email, or to go digging in its mail queue during development.
The problem was user error, as usual. So, boys and girls, don't forget to check write permissions on all the relevant folders.
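For anyone reproducing this setup, here is a minimal sketch of the two pieces that matter (the script path and log directory are assumptions, adjust to your layout):

# create the log directory, writable by the web server user
mkdir -p /tmp/sendmail-sim
chmod 1777 /tmp/sendmail-sim

# php.ini: route PHP's mail() through the stand-in script
# (the script path is hypothetical; -t -i are the flags PHP passes by default)
# sendmail_path = "/usr/local/bin/sendmail-sim.sh -t -i"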
I've used Fakemail for this purpose in the past. It accepts SMTP connections but writes all mail to files rather than sending them along as email. There are both python and perl implementations.
http://www.lastcraft.com/fakemail.php
We set up Apache to serve the directory that Fakemail was writing to. That was a quick and easy way for staff to view the messages Fakemail was receiving and review them for content, destination, etc. Formatting of HTML emails was a bit wacky for various reasons, so it was not so useful for vetting the formatting of HTML emails.
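For reference, the Apache side can be as small as a directory alias with indexes turned on; a sketch assuming Apache 2.4 syntax and a hypothetical capture directory:

# /etc/apache2/conf-available/fakemail.conf (file and directory paths are assumptions)
Alias /fakemail /var/www/fakemail
<Directory /var/www/fakemail>
    Options +Indexes
    Require all granted
</Directory>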
If you need to test your PHP application's ability to properly format and send email without actually sending it, I suggest you use the PEAR Mail package. Fiddling with your system is not a good idea.
If you use Mail from PEAR, you can switch between sendmail, SMTP, and a mock implementation of the mail interface by changing the driver from 'sendmail' or 'smtp' to 'mock'.
http://pear.php.net/package/Mail/docs/latest/Mail/Mail.html
http://pear.php.net/package/Mail/docs/latest/Mail/Mail_mock.html
If your code looks like this:
mail('me@example.com', 'My Subject', $message);
Then change it to be testable using PEAR Mail:
include('Mail.php');

function sendEmail($recipient, $subject, $body, $driver = 'mail') {
    // the chosen driver ('mail', 'smtp', 'sendmail' or 'mock') is passed to the factory
    $m = Mail::factory($driver);
    $headers = array(
        "From"    => "me@example.com",
        "To"      => $recipient,
        "Subject" => $subject);
    $m->send($recipient, $headers, $body);
    return $m;
}
// In production:
sendEmail('me@example.com', 'My Subject', $message);

// During testing:
$m = sendEmail('me@example.com', 'My Subject', $message, 'mock');
var_dump($m->sentMessages);
This is very crude, since you should be using PHPUnit or SimpleTest, but this is a topic for another time and place :)
A note: if you just want to grab stdin and write it to a file, you don't need to loop one line at a time; you can write
cat - >> "$FILENAME"
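Applied to the script above, the whole read loop collapses to this (same behavior, same FILENAME variable):

echo "$0 $*" > "$FILENAME"
# copy everything from stdin straight into the log
cat - >> "$FILENAME"
exit 0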
Related
We have a ksh script that gets executed daily. It performs its function properly; however, the mail portion does not reflect the subject of the email.
The line of code which executes the mail command is shown below:
mail -s "[Notification]: Success (Date:$batchdate)" `cat $recipients` < $mail_file
Our logs show that the variables batchdate, recipients, and mail_file get resolved correctly, so I'm not sure why the subject doesn't show. Is there a different syntax for mail to reflect the subject?
I will run the following script:
#!/bin/bash
./myprogram
#get exit code
exitvalue=$?
#log exit code value to /var/log/messages
logger -s "exit code of my program is " $exitvalue
But I don't want log message to be written in /var/log/messages because I don't have root privileges. Instead I want it to be written to a file in my home directory: /home/myuser/mylog
How should I modify the logger command above?
I don't think you really need to (or want to) involve logger/syslog for this. Simply replace the last line of the script with:
echo "Exit code of my program is $exitvalue" >> /some/file/that/you/can/write/to
The short "official" answer is that, unfortunately, you can't.
However, in most cases (e.g. on many Linux distros) you may just be lucky enough to have a logger implementation that both supports the --no-act option, and also implements some message formatting on its own (see below), in which case you can use a (moderately ugly) command like this to put a) a properly formatted message, b) to a file, c) not polluting the system logs:
logger --no-act -s "Oh dear..." 2>&1 | sed 's/^<[0-9]\+>//' >> /tmp/my.log
(Shout out to @BasileStarynkevitch and @Kieveli, who both mentioned parts of it before, just not the whole story.)
Notes:
1) To match the usual log file format, I had to "sed off" the <PRIVAL> field (PRIVAL = FACILITY * 8 + PRIORITY) that got prepended to the output on my Debian boxes. Your mileage may vary.
2) POSIX itself does not define how exactly the logger command should treat (any of) its options. E.g. the GNU logger does not support --no-act at all.
Also, when posting the original version of this answer 2 years ago, -s on my system did not do any formatting to the printed output, it just echoed the raw message alone, rendering it completely useless. (I didn't use Systemd at that time, which might explain the difference, seeing various conditional Systemd-related calls in the logger source code, but this is just vague guesswork.)
3) The most likely reason why the logger command has been so "historically unfriendly" for this trivial use case is that it's just a fairly simple frontend to the system logger. This means that anything you feed to it basically goes right through to syslogd (or systemd-journald etc.), which, in turn, does all sorts of other further processing, and dispatching (to various outlets: files, sockets etc., both local and remote), as well as bridging permission levels (so in your example you could still log to syslog or user.log, for instance, even though you may have no permission to write to those files directly).
For logger to be able to properly log to a file directly, it would either have to (re)implement some of the duties of the system logger, and the syslog() std. library function, or it would be not much more capable than a trivial echo one-liner (less complex, perhaps, even than the above logger invocation! ;) ). Either way, that seems like a bad idea.
A better solution could be if the POSIX interface (logger, syslog()) had a way to specify an ad-hoc outlet/facility parameter (like a filename) along with the message, so one could log to custom files without reconfiguring the system (which normal users have no permission to do anyway).
However, for the lack of anything better, the "canonical" Linux logger implementation actually does seem to duplicate some of the syslog functionality, so most of us can just enjoy that luxury for free. ;)
If you want to use logger so that the message appears both in the system logs and in some file of yours, you might do
logger -s your message 2> $HOME/somefile
since the -s option to logger also outputs on stderr which is redirected to the file with 2>
You might want to use 2>> $HOME/somefile to append to (rather than overwrite) your $HOME/somefile (read about bash redirections), and (see logger(1) for details) you may prefer to pass the option --id=$$ to logger.
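Putting those two suggestions together, a sketch (the log file name is an assumption):

# log to syslog as usual, and append a PID-tagged copy to your own file
logger --id=$$ -s "exit code of my program is $exitvalue" 2>> "$HOME/mylog"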
I think your better choice would be to use the date command rather than logger in cases where you don't want to write to the syslog files (and don't have privs to do so).
See "timestamp before an echo" for details on how to use date to prefix a message with a date and write it to a file.
You create a bash function that looks like the following, adjusting the date format string to get what you want:
echo_time() {
    echo "$(date +'%b %e %R')" "$@"
}
In your bash script, you would then use:
echo_time "Your message here" >> ${LOGFILE}
Which would put the following in your ${LOGFILE} file:
Mar 11 08:40 Your message here
$ man logger
Logger provides a shell command interface to the syslog(3) system log module.
You'll need to change your syslog configuration if you want it to log things to other places. You could establish a certain facility that has an output file in your home directory, for example. You would need to be root to do that, though.
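A sketch of what that root-only configuration could look like, assuming rsyslog and that the local1 facility is unused on your system:

# /etc/rsyslog.d/30-mylog.conf (requires root; the facility choice is an assumption)
local1.*    /home/myuser/mylog

# then, as the unprivileged user:
logger -p local1.info "my message"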
You can create a small logger function like this on top of your script file:
#!/bin/bash

# log file location
log_file=/tmp/mylogfile.log

# the LOG function: prefix each message with a timestamp and append it
LOG()
{
    time=$(date '+%Y-%m-%d %H:%M:%S')
    echo "$time >>> $1" >> "${log_file}"
}

message="test logger message"

# to send log lines to your log file, use
LOG "my message logged to my log file with timestamp = $message"
Check the output:
head -1 /tmp/mylogfile.log
2019-09-16 14:17:46 >>> my message logged to my log file with timestamp = test logger message
You can use cat -, which echoes your input, and then >> [file], which appends the output to [file] instead of the terminal, so the command would be
cat - >> [file]
The downside is that you have to use Ctrl+D (or Ctrl+C) to exit; logger is better for use inside code.
I have some Perl scripts which are scheduled using Task Scheduler on Windows 2003 R2 and 2008. These scripts are called directly using perl.exe or via a batch file.
Sometimes these scripts fail to execute (crash, maybe) and we are not aware of these crashes.
Is there any way a mail can be sent when these scripts crash? More or less like monitoring of these scripts.
Thanks in advance
Karthik
Why monitor the scripts from the outside when you can make the plugins monitor themselves? First, you can use eval in order to catch errors, and if an error occurs you can send an email with the Net::SMTP module as rpg suggested. However, I highly recommend that you use some kind of log file in order to keep track of what happened right before the error and what caused it. Your main goal should be to avoid the error. That, of course, requires you to modify the scripts; if, for any reason, you cannot do that, then the situation may be a little more complicated because you need another script.
With the Win32::Process::Info module you can retrieve running processes on Windows and check if your plugin is running or not.
use strict;
use warnings;
use Win32::Process::Info;
use Net::SMTP;

while (1) {
    # look for the script among the running processes
    my $found = 0;
    my $pi = Win32::Process::Info->new;
    foreach my $proc ($pi->GetProcInfo) {
        if ($proc->{Name} =~ /yourscriptname/i) {
            $found = 1;
        }
    }
    if (!$found) {
        # send a notification email
        eval {
            my $smtp = Net::SMTP->new("yoursmtpserver");
            $smtp->mail('sender@test.it');
            $smtp->recipient('recipient@test.it');
            $smtp->data;
            $smtp->datasend("From: sender\@test.it\n");
            $smtp->datasend("To: recipient\@test.it\n");
            $smtp->datasend("Subject: Plugin crashed!\n");
            $smtp->datasend("\n");
            $smtp->datasend("Plugin crashed!");
            $smtp->dataend;
            $smtp->quit;
        };
    }
    sleep(300);
}
I did not test this code because I don't have Perl installed on Windows but the logic should be ok.
For monitoring: check the script's exit code. This will help you detect a failure.
For sending mail: you can use the Net::SMTP module to send email. Let me know if you need a code snippet for it.
You can use PushMon to monitor your scripts. What you do is create PushMon URLs that match the schedule of your Perl scripts, and then "ping" those URLs when your scripts run successfully. If the URLs are not accessed, maybe because your scripts crashed or there was a power failure, PushMon will notify you by email.
Disclaimer: I am associated with PushMon.
I'm using Net::SSH2's scp_put method to place one file in my home directory on a Unix server from a Windows box. I am using Strawberry Perl 5.12 (portable version). I installed the libssh2 1.2.5 binaries and then Net::SSH2 from cpan.
Here's my code snippet:
sub uploadToHost{
    my $file=$_[0];
    my $host=$_[1];
    my $user=$_[2];
    my $pass=$_[3];
    my $remotelocation=$_[4];
    #makes a new SSH2 object
    my $ssh=Net::SSH2->new() or die "couldn't make SSH object\n";
    #prints proper error messages
    $ssh->debug(1);
    #nothing works unless I explicitly set blocking on
    $ssh->blocking(1);
    print "made SSH object\n";
    #connect to host; this always works
    $ssh->connect($host) or die "couldn't connect to host\n";
    print "connected to host\n";
    #authenticates with password
    $ssh->auth_password($user, $pass) or die "couldn't authenticate $user\n";
    print "authenticated $user\n";
    #this is the tricky bit that hangs
    $ssh->scp_put($file, $remotelocation) or die "couldn't put file in $remotelocation\n";
    print "uploaded $file successfully\n";
    $ssh->disconnect or die "couldn't disconnect\n";
} #ends sub
Output (edited for anonymity):
made SSH object
connected to host
authenticated
libssh2_scp_send_ex(ss->session, path, mode, size, mtime, atime) -> 0x377e61c
Net::SSH2::Channel::read(size = 1, ext = 0)
It then hangs forever (>40 minutes in one test) and needs to be killed.
What's strange is that it actually does scp the file to the remote server! It only hangs after it should have completed. I couldn't find references to this curious problem on StackOverflow or elsewhere.
Can anyone point me in the right direction to either 1) stop it from hanging, or 2) implement (as a workaround) a timer that kills this one command after a few seconds, which is enough time to scp the file?
Thanks, everyone!
You can try using alarm() to prod your process into behaving, if you save this example as 'alarm.pl' you can see how it works:
use strict;
use warnings;
use 5.10.0;

# pretend to be a slow process if run as 'alarm.pl s'
if (@ARGV && $ARGV[0] eq 's') {
    sleep(30);
    exit();
}

# Otherwise set an alarm, then run myself with 's'
eval {
    local $SIG{ALRM} = sub {die "alarmed\n"};
    alarm(5);
    system("perl alarm.pl s");
};

if ($@) {
    die $@ unless $@ eq "alarmed\n";
    say "Timed out slow process";
}
else {
    say "Slow process finished";
}
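Running it without arguments exercises the timeout path, since the child sleeps for 30 seconds but the alarm fires after 5:

$ perl alarm.pl
Timed out slow process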
Use Net::SFTP::Foreign with the Net::SSH2 backend, Net::SFTP::Foreign::Backend::Net_SSH2:
use Net::SFTP::Foreign;

my $sftp = Net::SFTP::Foreign->new($host,
                                   user => $user,
                                   password => $password,
                                   backend => 'Net_SSH2');
$sftp->die_on_error("Unable to connect to remote host");
$sftp->put($file, $remotelocation);
$sftp->die_on_error("Unable to copy file");
If that doesn't work either, you can try using plink (from the PuTTY project) instead of the Net::SSH2 backend.
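If you go the PuTTY route for the upload itself, pscp (PuTTY's scp client) can replace the whole sub; a command-line sketch with hypothetical host and paths (note that -pw on the command line is visible to other local users):

pscp -pw yourpassword C:\path\to\file user@host:/remote/location/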
I don't think it is hanging; it is just REALLY SLOW, 10x slower than it should be. The reason the file appears to be there is that it allocates the file before it has finished transferring. This isn't really too unexpected: Perl finds new ways to disappoint and frustrate programmers on a daily basis. Sometimes I think I spend more time working around Perl's idiosyncrasies, and learning 10 slightly different ways to do the same thing, than doing real work.
OK, I'm really pushing the bounds of what Shoes is for here, so I don't expect any miracles: is there a way to optionally run a Shoes app without a GUI?
The reason I'd like to do this is that I'm creating a tool for use by "non computer people" as well as "computer people" who would rather just run the program as a command-line tool, maybe even on systems without X/GTK installed. (I work as a multidisciplinary researcher, and Shoes is great for focusing on the tools instead of fiddling with GUI design all day.)
Here's some example code:
if(ARGV[1] == "nogui")
  puts "running computation on #{ARGV[2]}";
  exit();
end

Shoes.app(:width => 200, :height => 100) do
  @button = button("Quit").click() {
    exit();
  }
end
which works except I get a
Gtk-CRITICAL **: gtk_main_quit: assertion `main_loops != NULL' failed
error.
I haven't tried it, but I don't know that Shoes will even happily start on a system without X. You're probably better off creating a shell script that chooses which version to start. Something like this:
#!/usr/bin/sh

NOGUI=0
if [ $# -gt 0 ]; then
    NOGUI=$1
fi

if [ "$NOGUI" = nogui ]; then
    shift
    echo "Running in command-line mode..."
    ruby command-line.rb "$@"
else
    echo "Starting Shoes..."
    shoes shoes.rb "$@"
fi
If the first argument is nogui, the remaining arguments are sent to the Ruby version, otherwise all arguments (including the first) are sent to Shoes.
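Usage would then look something like this (the wrapper and input file names are hypothetical):

$ ./launcher.sh nogui input.dat    # command-line mode, runs command-line.rb
$ ./launcher.sh input.dat          # GUI mode, runs shoes.rb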
Now you just need to separate the actual performance logic out so that it can be imported into both versions.