Weird messages "rsyslogd: msg: ruleset ' &è·Æ ' could not be found and could not be assgined to message object" in rsyslog logs - rsyslog

We have an rsyslog configured to receive messages from multiple sources on different ports.
Messages are then assigned to different action rulesets depending on the incoming port.
We have noticed that sometimes (but not systematically), after an rsyslog restart, errors are logged in /var/log/messages with content like
"2022-08-16T16:46:26.841640+02:00 mysyslogserver rsyslogd: msg: ruleset ' 6È B ' could not be found and could not be assgined to message object. This possibly leads to the message being processed incorrectly. We cannot do anything against this, but wanted to let you know. [v8.32.0 try http://www.rsyslog.com/e/3003 ]"
The ruleset name changes every time and looks like a random binary string. Such a message is logged several thousand times (with the same ruleset name), at a rate that often exceeds the rate limit for internal messages.
(And of course we don't have rulesets with such names in our config file... )
Would you know what could be the cause of such an issue? Is it a bug?
Note that in some rulesets we use the "call" statement to call sub-rulesets, but we don't use "call_indirect".
Thanks in advance for any help.
S.Hemelaer

Related

Reference Conf File within Conf File or Apply Rule to All Listening RSyslog Ports

We have a number of individual conf files, each with its own ruleset bound to a unique port. We want to create a single conf file that filters/drops specific things, such as: if the msg is from a given IP, drop it, or if the msg contains x, drop it. And we want that drop filtering to apply to all listening ports. Is this possible to do? Should we avoid using rulesets?
We're trying to avoid updating the drop/filter rules in each conf file, for each port, every time the filter has an update.
Would anyone happen to know if one of the following things is possible with RSyslog?
Have 1 conf file that will listen on all rsyslog ports and be processed first? Without specifying each open port.
Have a conf file that calls another file with a rule in it?
Appreciate any help with this.
Typically, the default configuration file, say /etc/rsyslog.conf, will contain a line near the start saying something like
$IncludeConfig /etc/rsyslog.d/*.conf
or the equivalent RainerScript syntax
include(file="/etc/rsyslog.d/*.conf")
If not, you can add it.
This will include all files matching the glob pattern, in alphabetical order. So you can put any configuration in that directory, for example in arbitrarily named files such as 00-some.conf, 10-somemore.conf, and so on.
One file could have lots of input() statements like:
module(load="imtcp" MaxSessions="500")
input(type="imtcp" port="514")
input(type="imtcp" port="10514")
input(type="imtcp" port="20514")
assuming you are expecting to receive incoming TCP connections from remote
clients. See the imtcp documentation.
All the data from those remotes will be affected by any following rules.
For example, the last included file in the directory could hold lines like:
if ($msg contains "Password: ") then stop
if ($msg startswith "Debug") then stop
if ($hostname startswith "test") then stop
These will stop further processing of any matching input messages, effectively
deleting them.
The above inputs are all collected into a single global input queue.
All the if rules are applied to all the messages from that queue.
If you want to, you can partition some of the inputs into a new queue,
and write rules that apply only to that new, independent queue. The rest of the
configuration will know nothing about this new queue and its rules.
This is called a ruleset; see the rsyslog documentation on rulesets for
details and examples.
For example, you can have a ruleset called "myrules". Move one or more
inputs into the ruleset by adding the extra option:
input(type="imtcp" port="514" ruleset="myrules")
input(type="imtcp" port="10514" ruleset="myrules")
Move the rules that should apply to that queue into a ruleset definition:
ruleset(name="myrules") {
    if ($msg contains "Password: ") then stop
    if ($msg startswith "Debug") then stop
    *.* /var/log/mylogfile
}
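If you also want one shared set of drop rules used by every port-specific ruleset, one option (a minimal sketch; all file and ruleset names here are hypothetical) is to keep the common filters in their own ruleset and invoke it with the call statement from each port's ruleset:

# 00-common-filters.conf: shared drop rules, defined once
ruleset(name="commonfilters") {
    if ($msg contains "Password: ") then stop
    if ($hostname startswith "test") then stop
}

# 10-port514.conf: per-port ruleset that runs the shared filters first
ruleset(name="port514rules") {
    call commonfilters
    *.* /var/log/port514.log
}
input(type="imtcp" port="514" ruleset="port514rules")

With this layout, updating the drop filters means editing only the one shared file, not every per-port conf file.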

How to detect sender and destination of a notification in dbus-monitor?

My goal is to filter notifications coming from different applications (mainly from different browser windows).
I found that, with the help of dbus-monitor, I can write a small script that filters the notification messages I am interested in.
The filter script works well, but I have a small problem:
I am starting with the
dbus-monitor "interface='org.freedesktop.Notifications', destination=':1.40'"
command. I had to add "destination=':1.40'" because on Ubuntu 20.04 I always got the same notification twice.
The following output of
dbus-monitor --profile "interface='org.freedesktop.Notifications'"
demonstrates the reason:
type timestamp serial sender destination path interface member
# in_reply_to
mc 1612194356.476927 7 :1.227 :1.56 /org/freedesktop/Notifications org.freedesktop.Notifications Notify
mc 1612194356.483161 188 :1.56 :1.40 /org/freedesktop/Notifications org.freedesktop.Notifications Notify
As you can see, the sender :1.227 first sends to :1.56, and then :1.56 becomes the sender to the :1.40 destination. (A simple notify-send hello test message was sent.)
My script works that way, but every time the system boots up, I have to check the destination number and modify my script accordingly to get it working.
I have two questions:
how to discover the destination string automatically? (:1.40 in the above example; see the sketch after this list)
how to prevent the system from sending the same message twice? (If this question were answered, question 1 would become pointless.)
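For question 1, the unique name that currently owns a well-known bus name can be asked from the bus itself, so it does not have to be hard-coded. A minimal sketch, assuming the notification daemon owns org.freedesktop.Notifications on the session bus:

#!/bin/sh
# Ask the bus who currently owns org.freedesktop.Notifications;
# the reply contains the unique name (e.g. ":1.40").
DEST=$(dbus-send --session --print-reply \
    --dest=org.freedesktop.DBus /org/freedesktop/DBus \
    org.freedesktop.DBus.GetNameOwner \
    string:org.freedesktop.Notifications \
    | awk '/string/ { gsub(/"/, ""); print $2 }')

dbus-monitor "interface='org.freedesktop.Notifications', destination='$DEST'"

Run at script start-up, this removes the need to adjust the destination after every boot.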

Issues with mkdbfile in a simple "read a file > Create a hashfile job"

Hello DataStage-savvy people here.
Two days in a row, the same single DataStage job failed (it did not stop at all).
The job tries to create a hashfile using the command /logiciel/iis9.1/Server/DSEngine/bin/mkdbfile /[path to hashfile]/[name of hashfile] 30 1 4 20 50 80 1628
(last trace in the log)
Something to consider (or maybe not?):
The [name of hashfile] directory exists (and was last modified at the time of execution), but the file D_[name of hashfile] does not.
I am trying to understand what happened, to prevent the same incident from happening on the next run (tonight).
Before this, the job had been in production for years, and we never had an issue with it.
Using DataStage 9.1.0.1.
Did you check the job log to see if it captured an error? When a DataStage job executes a system command via the command execution stage or similar methods, the stdout of the called command is captured and then added to a message in the job log. Thus, if the mkdbfile command gives any output (success messages, errors, etc.), it should be captured and logged. The event may not be flagged as an error in the job log, depending on the return code, but the output should be there.
If there is no logged message revealing the cause of the failed create, a couple of things to check are:
-- Was the target directory on a disk that was possibly out of space at that time?
-- Do you have any antivirus software that scans directories on your system at certain times? If so, it can interfere with I/O. If it did a scan at the same time you had the problem, you may wish to update the AV software settings to exclude the directory you were writing the dbfile to.
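To narrow it down further, it may help to re-run the same command by hand (as the DataStage user) and look at its output, exit code, and the state of the filesystem directly. A sketch, keeping the placeholders from the question:

# Re-run the failing command manually and inspect the result
/logiciel/iis9.1/Server/DSEngine/bin/mkdbfile /[path to hashfile]/[name of hashfile] 30 1 4 20 50 80 1628
echo "exit code: $?"

# Check whether the target filesystem is out of space or inodes
df -h /[path to hashfile]
df -i /[path to hashfile]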

messages lost due to rate-limiting

We are testing the capacity of a Mail relay based on RHEL 7.6.
We are observing issues when sending a large number of msgs (e.g. ~1000 msgs in 60 seconds).
Although we have sent all the msgs and the recipient has received them all, log entries are missing from /var/log/maillog_rfc5424.
We have the following message in the /var/log/messages:
rsyslogd: imjournal: XYZ messages lost due to rate-limiting
We adapted the /etc/rsyslog.conf with the following settings but without effect:
$SystemLogRateLimitInterval 0 # turn off rate limit
$SystemLogRateLimitBurst 0 # turn rate limit off
Any ideas?
The error is from imjournal, but your configuration settings are for imuxsock.
According to the rsyslog configuration page you need to set
$imjournalRatelimitInterval 0
$imjournalRatelimitBurst 0
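On rsyslog versions that support RainerScript module parameters, the same settings can instead go on the module() line; a sketch (verify the parameter names against the imjournal doc page for your version):

module(load="imjournal"
       StateFile="imjournal.state"   # state file name from the stock RHEL config
       ratelimit.interval="0"        # 0 turns rate limiting off
       ratelimit.burst="0")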
Note that for very high message rates you might prefer to change to imuxsock, as the imjournal documentation says:
this module may be notably slower than when using imuxsock. The journal provides imuxsock with a copy of all “classical” syslog messages, however, it does not provide structured data. Only if that structured data is needed, imjournal must be used. Otherwise, imjournal may simply be replaced by imuxsock, and we highly suggest doing so.
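If you do decide to switch, the change on a stock RHEL 7 configuration is roughly the following (a sketch using the legacy directive style of that config; journald must still forward messages to the syslog socket, see ForwardToSyslog in journald.conf):

# Read the local log socket directly with imuxsock...
$ModLoad imuxsock
$OmitLocalLogging off
# ...and stop pulling messages from the journal:
# $ModLoad imjournal
# $IMJournalStateFile imjournal.state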

Ruby IMAP "changes" since last check

I'm working on an IMAP client using Ruby and Rails. I can successfully import messages, mailboxes, and more... However, after the initial import, how can I detect any changes that have occurred since my last sync?
Currently I am storing the UIDs and UID validity values in the database, comparing them, and searching appropriately. This works, but it doesn't detect deleted messages or changes to message flags, etc.
Do I have to pull all messages every time to detect these changes? How do other IMAP clients do it so quickly (e.g. Apple Mail and Postbox)? My script is already taking 10+ seconds per account with very few email addresses:
# select ourself as the current mailbox
@imap_connection.examine(self.location)
# grab all new messages and update them in the database
# if the UIDs are still valid, we will just fetch the newest UIDs
# otherwise, we need to search from when we last synced, which is slower :(
if self.uid_validity.nil? || uid_validity == self.uid_validity
  # for some IMAP servers, a uid_fetch on an empty mailbox will fail, so rescue it
  begin
    messages = @imap_connection.uid_fetch(uid_range, ['UID', 'RFC822', 'FLAGS'])
  rescue
    # gmail cries if the folder is empty
    uids = @imap_connection.uid_search(['ALL'])
    messages = @imap_connection.uid_fetch(uids, ['UID', 'RFC822', 'FLAGS']) unless uids.empty?
  end
  messages.each do |imap_message|
    Message.create_from_imap!(imap_message, self.id)
  end unless messages.nil?
else
  # SINCE takes a date, not a datetime
  query = self.last_synced.nil? ? ['ALL'] : ['SINCE', Net::IMAP.format_date(self.last_synced)]
  @imap_connection.search(query).each do |message_id|
    imap_message = @imap_connection.fetch(message_id, ['RFC822', 'FLAGS', 'UID'])[0]
    # don't mark the messages as read
    # @imap_connection.store(message_id, '-FLAGS', [:Seen])
    Message.create_from_imap!(imap_message, self.id)
  end
end
# now assume all UIDs are valid
self.uid_validity = uid_validity
# now remember that we just fetched all those messages
self.last_synced = Time.now
self.save!
There is an IMAP extension for Quick Flag Changes Resynchronization (RFC 4551). With this extension it is possible to search for all messages that have been changed since the last synchronization, based on a per-mailbox modification sequence (MODSEQ). However, as far as I know this extension is not widely supported.
There is an informational RFC that describes how IMAP clients should do synchronization (RFC-4549, section 4.3). The text recommends issuing the following two commands:
tag1 UID FETCH <lastseenuid+1>:* <descriptors>
tag2 UID FETCH 1:<lastseenuid> FLAGS
The first command is used to fetch the required information for all unknown mails (without knowing how many mails there are). The second command is used to synchronize the flags for the already seen mails.
AFAIK this method is widely used. Therefore, many IMAP servers contain optimizations in order to provide this information quickly. Typically, the network bandwidth is the limiting factor.
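With Ruby's Net::IMAP, that two-command resync could look roughly like this (a sketch; host, credentials, and last_seen_uid are placeholders, and the results still have to be merged into the local database):

require 'net/imap'

imap = Net::IMAP.new('imap.example.com', ssl: true) # hypothetical server
imap.login('user', 'password')
imap.examine('INBOX')

last_seen_uid = 4711 # highest UID already stored locally (placeholder)

# tag1 UID FETCH <lastseenuid+1>:* <descriptors>
# Servers return at least the last existing message even when nothing is
# new, so known UIDs are filtered out afterwards.
new_messages = (imap.uid_fetch((last_seen_uid + 1)..-1, %w[UID FLAGS RFC822]) || [])
               .reject { |m| m.attr['UID'] <= last_seen_uid }

# tag2 UID FETCH 1:<lastseenuid> FLAGS -- resynchronize flags of known mails
flag_updates = imap.uid_fetch(1..last_seen_uid, %w[UID FLAGS]) || []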
The IMAP protocol is brain-dead this way, unfortunately. IDLE really should be able to return this kind of stuff while connected, for example. The FETCH FLAGS suggestion above is the only way to do it.
One thing to be careful of, however, is that stored UIDs are only guaranteed to stay valid while the mailbox's UIDVALIDITY value is unchanged; if the server reports a new UIDVALIDITY, previously stored UIDs must be discarded.
