I'm trying to use a Bash script, invoked via swatch's exec action, to take certain log entries and generate a more user-friendly notification email. However, what I'm finding is that when swatch picks up the matching log file lines for one specific application, the Bash script errors out with sh: script: No such file or directory.
This seems to be due to the log line in question outputting something like:
[2017-05-22 20:00:41] somehost someapp[3999]: INFO: <script>: bad stuff happened bruh
I've tested the script with output from things like rsyslog to /var/log/messages and /var/log/secure, which cause no problems. The specific application I'm trying to build these notifications for is only problematic because its log lines include <script>, which I can't exclude at the source. Specifically, just <script is enough to trip things up; I've experimented with removing characters from those log lines that might be interpreted as something besides text.
Any ideas on how to stop the Bash script from interpreting <script> in the original log line as a file/directory? It's even fine if the suggested answer is to simply rip that out of the line. I've tried using sed 's/<script>//g' to strip <script> and store the result in a temp variable, with the intention of doing something like echo -e "${plainenglish}\n\nThe original log message is: ${logline}" >> "$outputfile", but I get the same error noted above.
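The error itself suggests the log line is being expanded unquoted somewhere along the way, so the shell parses <script> as an input redirection from a file named script. A minimal sketch of a quoting-based handler, assuming swatch passes the matched line as its first argument (logline, plainenglish, and outputfile are placeholder names, not swatch's own):

#!/bin/bash
# Hypothetical swatch exec handler; quoting every expansion keeps the
# shell from treating <script> in the line as a redirection.
logline="$1"
outputfile=/tmp/notification.txt
plainenglish="someapp reported a problem"
stripped=$(printf '%s' "$logline" | sed 's/<script>//g')   # optionally drop the xlog prefix
printf '%s\n\nThe original log message is: %s\n' "$plainenglish" "$stripped" >> "$outputfile"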
Edit: More info. The problem application in question is Kamailio, where most of the logging from routing execution is written by the xlog module. By default, xlog shoves <script> in front of everything you log. The module does include a parameter override (modparam) for this prefix, which defaults to <script>.
I realize this is a bit specific as questions and solutions go, but here's a summary followed by the suggestion:
- Using Kamailio (SIP server/proxy, etc.) with xlog configured to log calls to some logfile (e.g. /var/log/kamailio)
- Using swatch or some similar tool to grab specific Kamailio log lines with the intent of notifying, such as routing failures or suspicious requests received (e.g. +18928751123#1.1.1.1:5060)
- Using a Bash shell script to sanitize/normalize the corresponding log line in some manner for dispatch
The solution: set xlog's prefix modparam to something that Bash will not interpret as shell syntax or a legal command (script is a legal command name) when processing the log lines.
Bash processing gets thrown astray as soon as it hits <script> in the default xlog-produced log lines: left unquoted, <script is parsed as a redirection reading from a file named script, which is exactly the No such file or directory error above.
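As a sketch, assuming a stock xlog module (the replacement prefix string here is arbitrary):

# kamailio.cfg: override xlog's default "<script>" log prefix
modparam("xlog", "prefix", "KAMLOG: ")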
Related
I want to know how I can see exactly what the cron jobs are doing on each execution. Where are the log files located? Or can I send the output to my email? I have set the email address to send the log when the cron job runs but I haven't received anything yet.
* * * * * myjob.sh >> /var/log/myjob.log 2>&1
will log all output from the cron job to /var/log/myjob.log
You might use mail to send emails. Most systems will send unhandled cron job output by email to root or the corresponding user.
By default cron logs to /var/log/syslog so you can see cron related entries by using:
grep CRON /var/log/syslog
https://askubuntu.com/questions/56683/where-is-the-cron-crontab-log
There are at least three different types of logging:
The logging BEFORE the program is executed, which only logs IF the cron job TRIED to execute the command. That one is located in /var/log/syslog, as already mentioned by @Matthew Lock.
The logging of errors AFTER the program tried to execute, which can be sent to an email or to a file, as mentioned by @Spliffster. I prefer logging to a file, because with email you then have a NEW source of problems: checking that email sending and reception are working perfectly. Sometimes they are, sometimes they're not. For example, on a simple common desktop machine on which you are not interested in configuring an SMTP server, sometimes you will prefer logging to a file:
* * * * * COMMAND_ABSOLUTE_PATH > /ABSOLUTE_PATH_TO_LOG 2>&1
I would also consider checking the permissions of /ABSOLUTE_PATH_TO_LOG, and running the command with that user's permissions, just to verify while you test whether permissions might be a potential source of problems (see the quick check after this list).
The logging of the program itself, with its own error-handling and logging for tracking purposes.
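The permission check mentioned above might look like this (the path and user name are hypothetical):

ls -ld /var/log/myjob                           # is the directory writable by the cron user?
sudo -u cronuser touch /var/log/myjob/out.log   # try creating the log file as that user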
There are some common sources of problems with cronjobs:
* The ABSOLUTE PATH of the binary to be executed. When you run it from your shell it might work, but the cron process seems to use another environment, and hence it doesn't always find binaries if you don't use the absolute path.
* The LIBRARIES used by a binary. This is more or less the same as the previous point: make sure that, if you simply put the NAME of the command, it refers to exactly the binary that uses the very same libraries; or better, check that the binary you refer to by absolute path is the very same one you get when you use the console directly. Binaries can be found using the locate command, for example:
$ locate python
Be sure that the binary you refer to is the very same binary you are calling in your shell, or simply test again in your shell using the absolute path that you plan to put in the cronjob.
Another common source of problems is the syntax in the cronjob. Remember that there are special characters you can use for lists (commas), ranges (dashes), increments of ranges (slashes), etc. Take a look (a short illustration follows the link):
http://www.softpanorama.org/Utilities/cron.shtml
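For instance, a line combining all three (the script path is hypothetical):

# m  h      dom mon dow command
0,30 8-18/2 *   *   1-5 /usr/local/bin/poll.sh   # a list, a stepped range, and a day-of-week range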
Here is my code:
* * * * * your_script_fullpath >> your_log_path 2>&1
On Ubuntu you can enable a cron.log file to contain just the CRON entries.
Uncomment the line that mentions cron in the /etc/rsyslog.d/50-default.conf file:
# Default rules for rsyslog.
#
# For more information see rsyslog.conf(5) and /etc/rsyslog.conf
#
# First some standard log files. Log by facility.
#
auth,authpriv.* /var/log/auth.log
*.*;auth,authpriv.none -/var/log/syslog
#cron.* /var/log/cron.log
Save and close the file and then restart the rsyslog service:
sudo systemctl restart rsyslog
You can now see cron log entries in its own file:
sudo tail -f /var/log/cron.log
Sample outputs:
Jul 18 07:05:01 machine-host-name CRON[13638]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
However, you will not see more information about what scripts were actually run inside /etc/cron.daily or /etc/cron.hourly, unless those scripts direct output to the cron.log (or perhaps to some other log file).
If you want to verify if a crontab is running and not have to search for it in cron.log or syslog, create a crontab that redirects output to a log file of your choice - something like:
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h dom mon dow command
30 2 * * 1 /usr/local/sbin/certbot-auto renew >> /var/log/le-renew.log 2>&1
Steps taken from: https://www.cyberciti.biz/faq/howto-create-cron-log-file-to-log-crontab-logs-in-ubuntu-linux/
cron already sends the standard output and standard error of every job it runs by mail to the owner of the cron job.
You can use MAILTO=recipient in the crontab file to have the emails sent to a different account.
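For example, at the top of the crontab (the address and job are placeholders):

MAILTO=ops@example.com
0 3 * * * /usr/local/bin/nightly-report.sh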
For this to work, you need to have mail working properly. Delivering to a local mailbox is usually not a problem (in fact, chances are ls -l "$MAIL" will reveal that you have already been receiving some) but getting it off the box and out onto the internet requires the MTA (Postfix, Sendmail, what have you) to be properly configured to connect to the world.
If there is no output, no email will be generated.
A common arrangement is to redirect output to a file, in which case of course the cron daemon won't see the job return any output. A variant is to redirect standard output to a file (or write the script so it never prints anything; perhaps it stores results in a database instead, or performs maintenance tasks which simply don't output anything) and only receive an email if there is an error message.
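That variant might look like this (the script path is hypothetical); standard error is left untouched, so error messages are still mailed:

0 4 * * * /usr/local/bin/backup.sh >>/var/log/backup.log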
To redirect both output streams, the syntax is
42 17 * * * script >>stdout.log 2>>stderr.log
Notice how we append (double >>) instead of overwrite, so that any previous job's output is not replaced by the next one's.
As suggested in many answers here, you can have both output streams be sent to a single file; replace the second redirection with 2>&1 to say "standard error should go wherever standard output is going". (But I don't particularly endorse this practice. It mainly makes sense if you don't really expect anything on standard output, but may have overlooked something, perhaps coming from an external tool which is called from your script.)
cron jobs run in your home directory, so any relative file names should be relative to that. If you want to write outside of your home directory, you obviously need to separately make sure you have write access to that destination file.
A common antipattern is to redirect everything to /dev/null (and then ask Stack Overflow to help you figure out what went wrong when something is not working; but we can't see the lost output, either!)
From within your script, make sure to keep regular output (actual results, ideally in machine-readable form) and diagnostics (usually formatted for a human reader) separate. In a shell script,
echo "$results" # regular results go to stdout
echo "$0: something went wrong" >&2
Some platforms (and e.g. GNU Awk) allow you to use the file name /dev/stderr for error messages, but this is not properly portable; in Perl, warn and die print to standard error; in Python, write to sys.stderr, or use logging; in Ruby, try $stderr.puts. Notice also how error messages should include the name of the script which produced the diagnostic message.
Use the command crontab -e, and then edit the cron jobs as
* * * * * /path/file.sh > /pathToKeepLogs/logFileName.log 2>&1
Here, 2>&1 indicates that standard error (file descriptor 2) is redirected to the same place that standard output (file descriptor 1) is going.
If you'd still like to check your cron jobs you should provide a valid email account when setting the cron jobs in cPanel.
When you specify a valid email you will receive the output of the cron job that is executed. Thus you will be able to check it and make sure everything has been executed correctly. Note that you will not receive an email if there is no output from the cron job command.
Please bear in mind that you will receive an email for each of the executed cron jobs. This may flood your inbox in case your crons run too often.
In case you're running some command with sudo, it may not be allowed: sudo can require a tty (the requiretty option in sudoers), which cron jobs don't have.
I need to start up a Golang web server and leave it running in the background from a bash script. If the script in question is syntactically correct (as it will be most of the time) this is simply a matter of issuing a
go run /path/to/index.go &
However, I have to allow for the possibility that index.go is somehow erroneous. I should explain that in Golang this can happen for something as "trivial" as importing a module that you then fail to use. In this case the go run /path/to/index.go bit will return an error message. In the terminal this would be something along the lines of
index.go:4:10: expected...
What I need to be able to do is to somehow change that command above so I can funnel any error messages into a file for examination at a later stage. I tried variants on go run /path/to/index.go >> errors.txt with the terminating & in different positions but to no avail.
I suspect that there is a bash way to do this by altering the priority of evaluation of the command via some judiciously used braces/brackets etc. However, that is way beyond my bash capabilities. I would be most obliged to anyone who might be able to help.
Update
A few minutes later... After a few more experiments I have found that this works
go run /path/to/index.go &> errors.txt &
Quite apart from the fact that I don't in fact understand why it works, there remains the issue that it produces a 0-byte errors.txt file when the command runs to completion without Golang throwing up any error messages. Can someone shed light on what is going on and how it might be improved?
Taken from man bash.
Redirecting Standard Output and Standard Error
This construct allows both the standard output (file descriptor 1) and the standard error output (file descriptor 2) to be redirected to the file whose name is the expansion of word.
There are two formats for redirecting standard output and standard error:
&>word
and
>&word
Of the two forms, the first is preferred. This is semantically equivalent to
>word 2>&1
Appending Standard Output and Standard Error
This construct allows both the standard output (file descriptor 1) and the standard error output (file descriptor 2) to be appended to the file whose name is the expansion of word.
The format for appending standard output and standard error is:
&>>word
This is semantically equivalent to
>>word 2>&1
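Applied to the command from the question, the appending form keeps earlier runs' errors instead of overwriting them:

go run /path/to/index.go &>> errors.txt &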
Narūnas K's answer covers why the &> redirection works.
The reason why the file is created anyway is because the shell creates the file before it even runs the command in question.
You can see this by trying no-such-command > file.out and seeing that even though the shell errors because no-such-command doesn't exist the file gets created (using &> on that test will get the shell's error in the file).
This is why you can't do things like sed 'pattern' file > file to edit a file in place.
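A quick demonstration of both effects (file names are arbitrary):

$ no-such-command > file.out         # the shell opens file.out first, then fails to find the command
bash: no-such-command: command not found
$ ls file.out                        # the (empty) file exists anyway
file.out
$ echo hello > data
$ sed 's/hello/world/' data > data   # data is truncated before sed ever reads it
$ cat data                           # prints nothing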
Is there a way to add an entry to OS X's /var/log/install.log file from within a shell script?
Optimally the method wouldn't require root access as I don't think I'll have it.
The problem I'm having is I'm executing a shell script as part of an installation-check (p15 of Apple's Distribution Definition XML Schema) step from within an OS X installer package via the Javascript System.run() command (p30 of Apple's Installer Javascript Reference), but I can't see any output from that shell script.
I know the shell script is executing, because when I use the "logger" command from within the script, my log text appears inside /var/log/system.log. But in order to get a complete picture of what's going on, I'd need to merge it by hand with /var/log/install.log, which is where the general output of the installer, and any Javascript logging I do, ends up.
Any help would be appreciated. I've tried using the "logger" command's -f flag to use /var/log/install.log, e.g.
logger -f /var/log/install.log sometext
...but no dice; sometext still gets added to /var/log/system.log.
Read up on bash scripting.
You can add a line to a file like this
echo "My line here" >> /var/log/system.log
If it gives a Permission denied error, you need root access.
OK. A long time has passed, and I found out the following.
In normal scenarios, anything written to stdout by pre- and post-install scripts (mine are Python and Bash) will be logged by the installer daemon to /var/log/install.log. I experimented with various tools to create my installer packages, and they usually did this.
However, in my own deployment installer, for some reason, only things written to stderr get logged to /var/log/install.log - so you might want to try that too.
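A minimal sketch of that workaround (the script and message are hypothetical; as noted above, whether stdout or stderr ends up in install.log appears to vary):

#!/bin/bash
# hypothetical postinstall script
echo "postinstall: configuring example.app" >&2   # write to stderr so the installer daemon logs it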
A little late, but just had the same problem and was able to add logs to install.log from AppleScript using logger with the LOG_INSTALL facility:
logger -p 'install.error' "My error message"
That's not an answer per se, but maybe a hint? The installer man pages mention a "LOG_INSTALL facility", whose output is the desired /var/log/install.log. But what is this "facility" and where is it available? I can't find it. I really need to write my pre/post script failures and specific scenarios to that log.
I wrote a script that retrieves the currently running command using $BASH_COMMAND. The script basically does some logic to figure out the current command and the file being opened for each tmux session. Everything works great, except when the user runs a piped command (e.g. cat file | less), in which case $BASH_COMMAND only seems to store the first command before the pipe. As a result, instead of showing the command as less[file] (which is the actual program that has the file open), the script outputs it as cat[file].
One alternative I tried is relying on history 1 instead of $BASH_COMMAND. There are a couple of issues with this alternative as well. First, it does not auto-expand aliases the way $BASH_COMMAND does, which in some cases could confuse the script (for example, if I tell it to ignore ls but use ll instead (mapped to ls -l), the script will not ignore the command and processes it anyway), and including extra conditionals for each alias doesn't seem like a clean solution. The second problem is that I'm using HISTIGNORE to filter out some common commands which I still want the script to be aware of; using history will make the script ignore the last command unless it's tracked by history.
I also tried using ${#PIPESTATUS[@]} to see if the array length is 1 (no pipes) or higher (pipes used, in which case I would retrieve the history instead), but it seems to always report just 1 command as well.
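For what it's worth, PIPESTATUS does grow with the pipeline, but it is overwritten by every subsequent command, which can make it look like it always holds one entry if anything runs in between:

cat /etc/passwd | head -1 > /dev/null
echo "${#PIPESTATUS[@]}"   # prints 2: one exit status per pipeline segment
echo "${#PIPESTATUS[@]}"   # prints 1: the previous echo already replaced it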
Is anyone aware of other alternatives that could work for me (such as another variable that would store $BASH_COMMAND for the other subcalls that are to be executed after the current subcall is complete, or some way to be aware if the pipe was used in the last command)?
I think that you will need to change your implementation a bit and use the history command to get this to work. Also, use the alias command to check all of the configured aliases, and the which command to check whether the command is actually stored in any PATH directory. Good luck.
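A rough sketch of those checks (ll is assumed to be aliased to ls -l, as in the question):

history 1    # most recent history entry, as typed (aliases not expanded)
alias        # list all configured aliases
type ll      # resolves aliases, e.g. "ll is aliased to `ls -l'"
which ls     # path of the binary found on PATH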
I am a newbie to Unix scripting. I want to do the following, and I have little clue how to proceed.
I want to log the input and output of certain set of commands, given on the terminal, to a trace file. I should be able to switch it on and off.
E.g.
switch trace on
user:echo Hello World
user:Hello World
switch trace off
Then the content of the trace log file, e.g. trace.log, should be
echo Hello World
Hello World
One thing I can think of is to use set -x, redirecting its output to some file, but I couldn't find a way to do that. I tried man set and man -x, but found no entry. Maybe I am being too naive, but some guidance would be very helpful.
I am using bash shell.
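An aside on the man page dead end: set is a shell builtin, so it is documented under help set and in man bash rather than in a man page of its own. Bash 4.1 and later can also send set -x trace output to a file of your choosing via the BASH_XTRACEFD variable; a minimal sketch, with an arbitrary file descriptor number and file name:

exec 19> trace.log    # open fd 19 on the trace file
BASH_XTRACEFD=19      # bash >= 4.1: xtrace output goes to this fd instead of stderr
set -x                # switch trace on
echo Hello World
set +x                # switch trace off

Note that set -x records only the commands, not their output; the script(1) approach in the answer below captures both.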
See script(1), "make typescript of terminal session". To start a new transcript in file xyz: script xyz. To add on to an existing transcript in file xyz: script -a xyz.
There will be a few overhead lines, like Script started on ... and Script done on ... which you could use awk or sed to filter out on printout. The -t switch allows a realtime playback.
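A session for the example in the question might look like this (the exact wording of the overhead lines varies by platform):

$ script trace.log
Script started, file is trace.log
$ echo Hello World
Hello World
$ exit
Script done, file is trace.log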
I think there might have been a recent question regarding how to display a transcript in less, and although I can't find it, this question and this one address some of the same issues of viewing a file that contains control characters. (Captured transcripts often contain ANSI control sequences and usually contain Returns as well as Linefeeds.)
Update 1 A Perl program script-declutter is available to remove special characters from script logs.
The program is about 45 lines of code found near the middle of the link. Save those lines of code in a file called script-declutter, in a directory that's on your PATH (for example, $HOME/bin if that's on your search path, else (e.g.) /usr/local/bin), and make the file executable. After that, a command like
script-declutter typescript > out
will remove most special characters from file typescript, while directing the result to file out.