Can severity be set conditionally for imfile input in rsyslog 8.32?

I have a text logfile from an application that is formatted like so...
2019-12-18 12:32:00 DEBUG This is a debugging line
2019-12-18 12:32:15 This is an informational line
2019-12-18 12:32:17 WARNING This is a warning line
2019-12-18 12:32:33 ERROR This is an error line
2019-12-18 12:33:44 ERROR This is a multi-line error message
This is more of the previous error message
2019-12-18 12:34:15 This is back to another informational line
I have configured rsyslog to ingest this file using the imfile module and ship it off to my central syslog server...
module(load="imfile")
input(type="imfile"
file="/usr/share/myapplication/myapplication.log"
tag="myapplication-log:"
facility="local4"
severity="info"
startmsg.regex="^[0-9]{4}-[0-9]{1,2}-[0-9]{1,2} [0-9]{1,2}:[0-9]{2}:[0-9]{2} "
readTimeout="5"
)
*.* @192.168.1.4
So far so good. The remote syslog server receives the lines correctly and handles the multi-line error as a single message. Almost perfect, but now I want to expand things a bit.
All syslog messages in the above are sent as local4.info as expected. The original message lines contain enough information for me to be able to correctly identify the proper 'severity' level for the message and I'd like to be able to do that, but I can't seem to figure out the method.
Something to the effect of this non-working pseudo-code...
if $programname == "myapplication-log" then {
if ($msg contains " DEBUG ") then severity debug;
if ($msg contains " WARNING ") then severity warn;
if ($msg contains " ERROR ") then severity error;
}
Any help appreciated. Thanks.
-- EDIT for clarity --
As @meuh pointed out, this could be accomplished at the output phase using templating, but my preference would be to have the severity correctly determined during the input phase. That way, any output of this log is handled exactly like any other log, and I don't have to remember to perform special output handling if my outputs change a year from now.
A better pseudo-code example of what I am looking for would be...
input(type="imfile"
file="/usr/share/myapplication/myapplication.log"
tag="myapplication-log:"
facility="local4"
severity="info"
severity="debug" if ($msg contains " DEBUG ");
severity="warn" if ($msg contains " WARNING ");
severity="error" if ($msg contains " ERROR ");
startmsg.regex="^[0-9]{4}-[0-9]{1,2}-[0-9]{1,2} [0-9]{1,2}:[0-9]{2}:[0-9]{2} "
readTimeout="5"
)

Once the input has been parsed, you enter a second phase where you have access to the properties and can create a template variation on the usual network output. The <prio> part of the output is made up from
prio = facility*8 + severity
where RFC 5424 lists the numbers for each facility and severity: local4 is 20; severity debug is 7, warning is 4, and error is 3.
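As a sanity check on the arithmetic, here is a small Python sketch; the mapping tables contain only the values transcribed from RFC 5424 that are needed here:

```python
# Syslog <PRI> calculation: prio = facility * 8 + severity (RFC 5424).
FACILITIES = {"local4": 20}
SEVERITIES = {"error": 3, "warning": 4, "info": 6, "debug": 7}

def pri(facility: str, severity: str) -> int:
    """Return the numeric syslog priority for a facility/severity pair."""
    return FACILITIES[facility] * 8 + SEVERITIES[severity]

print(pri("local4", "info"))   # 166, i.e. <166> in the forwarded message
print(pri("local4", "error"))  # 163
```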
The standard RSYSLOG_ForwardFormat used to send messages is defined as
template(name="RSYSLOG_ForwardFormat" type="string"
    string="<%PRI%>%TIMESTAMP:::date-rfc3339% %HOSTNAME% %syslogtag:1:32%%msg:::sp-if-no-1st-sp%%msg%")
You want to recalculate the %PRI% property. You cannot change the property itself, but you can set your own local variable, e.g. $.myprio, and use that in the template. The result is:
template(name="myformat" type="string"
string="<%$.myprio%>%TIMESTAMP:::date-rfc3339% %HOSTNAME% %syslogtag:1:32% %msg:::sp-if-no-1st-sp%%msg%")
if $programname == "myapplication-log" then {
set $.myseverity = 6;
if ($msg contains " DEBUG ") then set $.myseverity = 7;
if ($msg contains " WARNING ") then set $.myseverity = 4;
if ($msg contains " ERROR ") then set $.myseverity = 3;
set $.myprio = 20*8+$.myseverity;
action(type="omfwd" target="192.168.1.4" template="myformat")
}
For alternatives, look through the rsyslog modules for input, parsing, message modification and output. Possibilities are
input module improg that can run a program and accept input piped from it,
the parser pmnormalize using liblognorm which can parse data according to your rules, and
modification module mmnormalize using the same liblognorm which can modify the message.
The liblognorm parser is probably the definitive solution, but it is fairly complex and I cannot provide further advice on using it.
However, improg is a simple way to externalize the pre-processing into a separate program written in any suitable language, or even a shell script, using inotify to tail the input file and munge the lines before passing them on to rsyslog.
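As an illustration only (not tied to improg), the keyword-to-severity step such a pre-processor would perform might look like this in Python; the keywords and the "info" default mirror the question's log format and imfile configuration:

```python
# Hypothetical pre-processing step: derive a severity name from a log line.
def severity_for(line: str) -> str:
    for keyword, severity in ((" DEBUG ", "debug"),
                              (" WARNING ", "warning"),
                              (" ERROR ", "error")):
        if keyword in line:
            return severity
    return "info"  # lines without a keyword are informational

print(severity_for("2019-12-18 12:32:33 ERROR This is an error line"))   # error
print(severity_for("2019-12-18 12:32:15 This is an informational line")) # info
```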


How to debug `Error while processing function` in `vim` and `nvim`?

TL;DR
How can I find exactly where a vim or nvim error started (in which file?) when I'm interested in fixing the actual issue and not just removing the bad plugin? Is there anything better than strace and guesswork to find the error's origin?
Issue
I often add a plugin to my vim or nvim config and end up getting errors on hooks (buffer open, close, write):
"test.py" [New] 0L, 0C written
Error detected while processing function 343[12]..272:
line 8:
E716: Key not present in Dictionary: _exec
E116: Invalid arguments for function get(a:args, 'exec', a:1['_exec'])
E15: Invalid expression: get(a:args, 'exec', a:1['_exec'])
The problem is, I have no idea where these come from; I only get a line number in some unknown file, and I know it's not my vim/nvim config file.
Somewhere, you have a plugin that has defined a dictionary with anonymous functions (see :help anonymous-function).
For the curious ones, it's done this way:
let d = {}
function! d.whatever() abort
throw "blah"
endfunction
When you execute this function, you'll get the kind of error you're currently observing. That's why I stopped working this way and now prefer:
let d = {}
function s:whatever() abort
throw "blah"
endfunction
let d.whatever = function('s:whatever') " a workaround is required for older versions of vim
" At least this way I'll get a `<SNR>42_whatever` in the exception throwpoint, and thus a scriptname.
That's the why. Now, back to your problem: AFAIK, the only things you'll be able to learn are the two functions that have been called:
at line 12 of :function {343}, you've called
:function {272}, which contains an error at line 8.
These two commands (possibly prefixed with :verbose, I don't remember exactly) will give you the source code of the two functions, which you can then grep for across your plugins to find where they are defined.

HWUT - selectively printing from read buffer into .exe file in OUT folder

I am receiving data from a serial port and use HWUT for comparing my test results. The content of the receive buffer cannot be used directly for comparing the GOOD and OUT results, because OUT will always contain unnecessary command prompts, line endings and other noise. I am looking for a way to select what gets written from the read buffer into the OUT file. For example:
←[36m
A> target cmd
←[36m
{t=3883.744541 s} Received data
A> result : 1
bytes read 518Closing serial port...OK
And I would like the out file to only have 'result : 1'.
When I checked the code, messages.py seems to print to stdout, but I'm not sure whether that is what gets written into the OUT file. How can this be achieved?
Anything that you print to stdout should appear in the "OUT/*" files. If it does not, then this would have nothing to do with reception via the serial line(s). Here is what I would do to analyze it:
In your connector application there must be something like
received_n = receive(.., &buffer[0], Size);
buffer[received_n] = '\0'; /* terminating zero */
printf("%s", &buffer[0]);
If this is so, then
Write in parallel into a log file.
static FILE* log_fh = fopen("tmp.log", "wb");
...
printf("%s", &buffer[0]);
fwrite((void*)buffer, 1, received_n, log_fh);
Compare 'tmp.log' with the file in OUT.
If there is a difference, HWUT is to blame.
Check the output before you write it.
if( my_condition(buffer, received_n) ) printf("%s", &buffer[0]);
HWUT has an internal infrastructure to post-process test output, but at the time of this writing it is not documented and therefore not reliable.
Edit the file "hwut-info.dat" in your TEST directory.
These R my Tests on Something Important (Title)
-------------------------------------------------------
--not *.exe
bash execute-this.sh
-------------------------------------------------------
The --not *.exe makes sure that HWUT will not execute the *.exe files which you compiled. The bash execute-this.sh line lets HWUT consider the file execute-this.sh as a test application and call it with 'bash'.
Inside the execute-this.sh you might want to make your application, execute it and filter the output, i.e.
#!/bin/bash
make my-test.exe
./my-test.exe | awk ' /^A>/ '
which prints only those lines that start with 'A>'. grep and awk are your friends here; you might want to familiarize yourself with both.
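The same filter can of course be written in any language; here is a Python equivalent of the awk one-liner, shown on sample output like the question's:

```python
# Python equivalent of `awk '/^A>/'`: keep only lines starting with "A>".
def keep_result_lines(lines):
    return [line for line in lines if line.startswith("A>")]

sample = [
    "A> target cmd\n",
    "{t=3883.744541 s} Received data\n",
    "A> result : 1\n",
    "bytes read 518Closing serial port...OK\n",
]
print("".join(keep_result_lines(sample)), end="")
```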
Alternatively, you may filter directly in your connection application.

Trying to make a bug log in a ruby program. How can I write to a file without over-writing the data?

I'm trying to add a bug log to a Ruby program so that when I come across bugs I can run the program and it will automatically write them to a text file. I was able to get everything to write to a file, but every time I enter a new bug it overwrites the file, so it can only hold one entry at a time.
Here is my code thus far:
print "What is the error message? "
msg = "Error message: " + gets.chomp
print "What does the error mean? "
mean = "Error meaning: " + gets.chomp
print "What resolved the error? "
resolved = "Error resolution: " + gets.chomp
File.open('Bug_Log.txt', 'w') do |write|
write.puts msg
write.puts mean
write.puts resolved
end
This is happening because you're opening the file in 'w' mode, which overwrites the file, instead of 'a' ("append") mode, which will append to what's already in the file.
Try changing this line:
File.open('Bug_Log.txt', 'w') do |write|
to this:
File.open('Bug_Log.txt', 'a') do |write|
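For what it's worth, the same "'w' truncates, 'a' appends" distinction exists in other languages' file APIs as well, e.g. Python's open(); a quick sketch (the file path is illustrative):

```python
# "a" (append) preserves existing content; "w" would truncate it on every open.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "Bug_Log.txt")
for entry in ("Error message: first bug", "Error message: second bug"):
    with open(path, "a") as log:  # append mode: earlier entries survive
        log.write(entry + "\n")

with open(path) as log:
    print(len(log.readlines()))  # 2: both entries are still there
```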

Can I fully customise an Xcode 4 Run Script Build Phase error/warning within the Issues Navigator and Build Logs?

I read on a blog somewhere that you can integrate your own build scripts with Xcode's Issues Navigator and Build Logs GUIs by printing messages to STDOUT using the following format:
FILENAME:LINE_NUMBER: WARNING_OR_ERROR: MSG
(Where WARNING_OR_ERROR is either warning or error)
e.g.
/path/to/proj/folder/somefile.ext:10: warning: There was a problem processing the file
will show a warning at line 10 of somefile.ext which reads "There was a problem processing the file". This does actually work (which is fantastic).
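Emitting such a diagnostic from a Run Script phase is just a matter of printing a line in that format to stdout; a minimal sketch in Python (the path and message are the example above, not real files):

```python
# Print a diagnostic in the FILENAME:LINE_NUMBER: WARNING_OR_ERROR: MSG format
# that Xcode parses from a Run Script build phase's stdout.
def xcode_diagnostic(path: str, line: int, kind: str, message: str) -> str:
    assert kind in ("warning", "error")
    return f"{path}:{line}: {kind}: {message}"

print(xcode_diagnostic("/path/to/proj/folder/somefile.ext", 10, "warning",
                       "There was a problem processing the file"))
```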
Is there any official documentation of this feature (I couldn't find any)?
In the Issues Navigator, I get a warning for the file somefile.ext, but the issue's title is "Shell Script Invocation Error" (my message appears underneath the title). Is there some way to set that heading, or am I stuck with that generic (and ugly) "Shell Script Invocation Error"?
It doesn't really answer your question as to whether you can customise the "Shell Script Invocation Error" title, but by default Perl code doesn't get the nice error messages you describe. If you include this Perl module (or just the code from it) in your Perl script, it does generate the nice messages you talk about (still under the same "Shell Script Invocation Error" title you mention). I just thought I'd share it for anyone using a Perl script in Xcode and getting really lousy errors.
package XcodeErrors;
use strict;
use warnings;
$SIG{__WARN__} = sub
{
my @loc = caller(0);
print STDERR "$loc[1]:$loc[2]: warning: ", @_, "\n";
return 1;
};
$SIG{__DIE__} = sub
{
my @loc = caller(0);
print STDERR "$loc[1]:$loc[2]: error: ", @_, "\n";
exit 1;
};
1;
Exiting with 0 from your customized shell script will turn off the "Shell Script Invocation Error".

How to get R script line numbers at error?

If I am running a long R script from the command line (R --slave script.R), then how can I get it to give line numbers at errors?
I don't want to add debug commands to the script if at all possible; I just want R to behave like most other scripting languages.
This won't give you the line number, but it will tell you where the failure happens in the call stack which is very helpful:
traceback()
[Edit:] When running a script from the command line you will have to skip one or two calls, see traceback() for interactive and non-interactive R sessions
I'm not aware of another way to do this without the usual debugging suspects:
debug()
browser()
options(error=recover) [followed by options(error = NULL) to revert it]
You might want to look at this related post.
[Edit:] Sorry...just saw that you're running this from the command line. In that case I would suggest working with the options(error) functionality. Here's a simple example:
options(error = quote({dump.frames(to.file=TRUE); q()}))
You can create as elaborate a script as you want on an error condition, so you should just decide what information you need for debugging.
Otherwise, if there are specific areas you're concerned about (e.g. connecting to a database), then wrap them in a tryCatch() function.
Doing options(error=traceback) provides a little more information about the lines leading up to the error: it causes a traceback to appear when there is an error, and for some errors it includes the line number, prefixed by #. But it's hit or miss; many errors won't get line numbers.
Support for this will be forthcoming in R 2.10 and later. Duncan Murdoch just posted to r-devel on Sep 10 2009 about findLineNum and setBreakpoint:
I've just added a couple of functions to R-devel to help with
debugging. findLineNum() finds which line of which function
corresponds to a particular line of source code; setBreakpoint() takes
the output of findLineNum, and calls trace() to set a breakpoint
there.
These rely on having source reference debug information in the code.
This is the default for code read by source(), but not for packages.
To get the source references in package code, set the environment
variable R_KEEP_PKG_SOURCE=yes, or within R, set
options(keep.source.pkgs=TRUE), then install the package from source
code. Read ?findLineNum for details on how to tell it to search
within packages, rather than limiting the search to the global
environment.
For example,
x <- " f <- function(a, b) {
if (a > b) {
a
} else {
b
}
}"
eval(parse(text=x)) # Normally you'd use source() to read a file...
findLineNum("<text>#3") # "<text>" is a dummy filename used by parse(text=)
This will print
f step 2,3,2 in <environment: R_GlobalEnv>
and you can use
setBreakpoint("<text>#3")
to set a breakpoint there.
There are still some limitations (and probably bugs) in the code; I'll
be fixing those.
You do it by setting
options(show.error.locations = TRUE)
I just wonder why this setting is not the default in R; it should be, as it is in every other language.
Specifying the global R option for handling non-catastrophic errors worked for me, along with a customized workflow for retaining info about the error and examining this info after the failure. I am currently running R version 3.4.1.
Below, I've included a description of the workflow that worked for me, as well as some code I used to set the global error handling option in R.
As I have it configured, the error handling also creates an RData file containing all objects in working memory at the time of the error. This dump can be read back into R using load() and then the various environments as they existed at the time of the error can be inspected interactively using debugger(errorDump).
I will note that I was able to get line numbers in the traceback() output from any custom functions within the stack, but only if I used the keep.source=TRUE option when calling source() for any custom functions used in my script. Without this option, setting the global error handling option as below sent the full output of the traceback() to an error log named error.log, but line numbers were not available.
Here are the general steps I took in my workflow, and how I was able to access the memory dump and error log after a non-interactive R failure.
I put the following at the top of the main script I was calling from the command line. This sets the global error handling option for the R session. My main script was called myMainScript.R. The various lines in the code have comments after them describing what they do. Basically, with this option, when R encounters an error that triggers stop(), it will create an RData (*.rda) dump file of working memory across all active environments in the directory ~/myUsername/directoryForDump and will also write an error log named error.log with some useful information to the same directory. You can modify this snippet to add other handling on error (e.g., add a timestamp to the dump file and error log filenames, etc.).
options(error = quote({
setwd('~/myUsername/directoryForDump'); # Set working directory where you want the dump to go, since dump.frames() doesn't seem to accept absolute file paths.
dump.frames("errorDump", to.file=TRUE, include.GlobalEnv=TRUE); # First dump to file; this dump is not accessible by the R session.
sink(file="error.log"); # Specify sink file to redirect all output.
dump.frames(); # Dump again to be able to retrieve error message and write to error log; this dump is accessible by the R session since not dumped to file.
cat(attr(last.dump,"error.message")); # Print error message to file, along with simplified stack trace.
cat('\nTraceback:');
cat('\n');
traceback(2); # Print full traceback of function calls with all parameters. The 2 passed to traceback omits the outermost two function calls.
sink();
q()}))
Make sure that from the main script and any subsequent function calls, anytime a function is sourced, the option keep.source=TRUE is used. That is, to source a function, you would use source('~/path/to/myFunction.R', keep.source=TRUE). This is required for the traceback() output to contain line numbers. It looks like you may also be able to set this option globally using options( keep.source=TRUE ), but I have not tested this to see if it works. If you don't need line numbers, you can omit this option.
From the terminal (outside R), call the main script in batch mode using Rscript myMainScript.R. This starts a new non-interactive R session and runs the script myMainScript.R. The code snippet given in step 1 that has been placed at the top of myMainScript.R sets the error handling option for the non-interactive R session.
Encounter an error somewhere within the execution of myMainScript.R. This may be in the main script itself, or nested several functions deep. When the error is encountered, handling will be performed as specified in step 1, and the R session will terminate.
An RData dump file named errorDump.rda and an error log named error.log are created in the directory specified by '~/myUsername/directoryForDump' in the global error handling option setting.
At your leisure, inspect error.log to review information about the error, including the error message itself and the full stack trace leading to the error. Here's an example of the log that's generated on error; note the numbers after the # character are the line numbers of the error at various points in the call stack:
Error in callNonExistFunc() : could not find function "callNonExistFunc"
Calls: test_multi_commodity_flow_cmd -> getExtendedConfigDF -> extendConfigDF
Traceback:
3: extendConfigDF(info_df, data_dir = user_dir, dlevel = dlevel) at test_multi_commodity_flow.R#304
2: getExtendedConfigDF(config_file_path, out_dir, dlevel) at test_multi_commodity_flow.R#352
1: test_multi_commodity_flow_cmd(config_file_path = config_file_path,
spot_file_path = spot_file_path, forward_file_path = forward_file_path,
data_dir = "../", user_dir = "Output", sim_type = "spot",
sim_scheme = "shape", sim_gran = "hourly", sim_adjust = "raw",
nsim = 5, start_date = "2017-07-01", end_date = "2017-12-31",
compute_averages = opt$compute_averages, compute_shapes = opt$compute_shapes,
overwrite = opt$overwrite, nmonths = opt$nmonths, forward_regime = opt$fregime,
ltfv_ratio = opt$ltfv_ratio, method = opt$method, dlevel = 0)
At your leisure, you may load errorDump.rda into an interactive R session using load('~/path/to/errorDump.rda'). Once loaded, call debugger(errorDump) to browse all R objects in memory in any of the active environments. See the R help on debugger() for more info.
This workflow is enormously helpful when running R in some type of production environment where you have non-interactive R sessions being initiated at the command line and you want information retained about unexpected errors. The ability to dump memory to a file you can use to inspect working memory at the time of the error, along with having the line numbers of the error in the call stack, facilitate speedy post-mortem debugging of what caused the error.
First run options(show.error.locations = TRUE) and then traceback(). The error line number will be displayed after the # character.
