rsyslog (v8.39.0): mmnormalize is not recognized. How can this be fixed?

I am trying to use mmnormalize to structure text logs. With both rsyslog 8.16.x and 8.39.0, the logs show that the module was not recognized when I try to use mmnormalize as an action. Details are below.
How can I set up rsyslog to use mmnormalize?
Remediation already tried
Installed liblognorm, libestr, json-c, and libfastjson separately
Upgraded rsyslog from 8.16.x to 8.39.0
rsyslog.conf
module(load="mmnormalize") # text parsing
Syslog log messages
Dec 3 11:33:55 sys1 systemd[1]: Starting System Logging Service...
Dec 3 11:33:55 sys1 systemd[1]: Started System Logging Service.
Dec 3 11:33:55 sys1 rsyslogd: could not load module 'mmnormalize',
errors: trying to load module /usr/lib/rsyslog/mmnormalize.so:
/usr/lib/rsyslog/mmnormalize.so: cannot open shared object file: No
such file or directory [v8.39.0 try http://www.rsyslog.com/e/2066 ]
Dec 3 11:33:55 sys1 rsyslogd: module name 'mmnormalize' is unknown
[v8.39.0 try http://www.rsyslog.com/e/2209 ]
Dec 3 11:33:55 sys1 rsyslogd: error during parsing file
/etc/rsyslog.d/52-tomcat.conf, on or before line 52: errors occured in
file '/etc/rsyslog.d/52-tomcat.conf' around line 52 [v8.39.0 try
http://www.rsyslog.com/e/2207 ]

The following way to install mmnormalize worked for me. I was running this on Ubuntu (Xenial):
sudo apt-get install rsyslog-mmnormalize
Rich Megginson (thank you) answered as below to the same question I posted to the rsyslog mailing list (rsyslog@lists.adiscon.com). As mentioned above, it worked for me.
"On RHEL/CentOS/Fedora and similar platforms, the rsyslog-mmnormalize is a separate RPM that must be installed separately e.g.
yum install rsyslog rsyslog-mmnormalize ....
"

Related

Datadog agent does not send data

I ran into an issue with my Datadog agent. I installed Agent version 7.35.0 on an EC2 Ubuntu machine. After I restarted the agent I got this error:
Apr 10 11:24:24 ip-10-100-0-33 agent[9951]: 2022-04-10 11:24:24 UTC | CORE | WARN |(pkg/collector/python/datadog_agent.go:124 in LogMessage) | disk:e5dffb8bef24336f |(disk.py:136) | Unable to get disk metrics for /sys/kernel/debug/tracing: [Errno 13] Permission denied: '/sys/kernel/debug/tracing'. You can exclude this mountpoint in the settings if it is invalid.
From what I've seen in other threads, the suggested answer is:
Can you add "tracefs" to the "file_system_blacklist" configuration to see if that unblocks you? We can add it by default if it does.
But I do not completely understand this answer, and I am not sure what I should change to fix this issue.
If anyone has experienced this kind of thing and can help me, it would be super helpful.
Thank you!
With Datadog Agent 7:
mv /etc/datadog-agent/conf.d/disk.d/conf.yaml.default /etc/datadog-agent/conf.d/disk.d/conf.yaml
Then in /etc/datadog-agent/conf.d/disk.d/conf.yaml, uncomment file_system_global_exclude and underneath it add - tracefs:
init_config:
  file_system_global_exclude:
    - tracefs
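After saving the file, restart the agent so the disk check picks up the change (the service name assumes the standard systemd-based Linux install):
sudo systemctl restart datadog-agent
# confirm the disk check now runs without the permission warning
sudo datadog-agent status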

rsyslog 8.34.0: could not load module '/usr/lib/rsyslog/omuxsock.so'

My project requires forwarding logs to a socket using rsyslog, and rsyslog provides the omuxsock output module for this. When I try to use it following the standard example, I see the error below.
rsyslogd: could not load module '/usr/lib/rsyslog/omuxsock.so', dlopen: Error loading shared library /usr/lib/rsyslog/omuxsock.so: No such file or directory [v8.34.0 try http://www.rsyslog.com/e/2066 ]
Could someone please help me in solving this issue?
System Info
Alpine container = v3.8
rsyslog = 8.34.0-r0
Full log:
/ # rsyslogd -N6 | head -10
rsyslogd: version 8.34.0, config validation run (level 6), master config /etc/rsyslog.conf
rsyslogd: could not load module '/usr/lib/rsyslog/omuxsock.so', dlopen: Error loading shared library /usr/lib/rsyslog/omuxsock.so: No such file or directory [v8.34.0 try http://www.rsyslog.com/e/2066 ]
rsyslogd: invalid or yet-unknown config file command 'OMUxSockSocket' - have you forgotten to load a module? [v8.34.0 try http://www.rsyslog.com/e/3003 ]
rsyslogd: error during parsing file /etc/rsyslog.conf, on or before line 30: errors occured in file '/etc/rsyslog.conf' around line 30 [v8.34.0 try http://www.rsyslog.com/e/2207 ]
rsyslogd: error during parsing file /etc/rsyslog.d/rsyslog_stats.conf, on or before line 15: invalid character '\' in expression - is there an invalid escape sequence somewhere? [v8.34.0 try http://www.rsyslog.com/e/2207 ]
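As a first diagnostic step (a sketch, assuming the stock Alpine rsyslog package), check whether the shared object is present at all and whether omuxsock ships in a separate package:
ls /usr/lib/rsyslog/ | grep omuxsock    # is the module file installed at all?
apk info -L rsyslog | grep omuxsock     # does the installed rsyslog package ship it?
apk search -v rsyslog                   # are module sub-packages available separately?
If the module only exists in a sub-package, installing that package should make the dlopen error go away.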

sctp_core_destroy(): SCTP API not initialized in kamailio start

Hi, I have installed Kamailio. It starts the first time, but when I stop and start it again it gives sctp_core_destroy(): SCTP API not initialized. I have already installed the SCTP module.
yyerror_at(): parse error in config file /etc/kamailio/kamailio.cfg
load_module(): could not find module <db_mysql> in </usr/lib/kamailio/modules>
[sctp_core.c:53]: sctp_core_destroy(): SCTP API not initialized
From the log it is obvious that you have successfully compiled and installed the SCTP module; however, it could NOT be initialized.
Note that this error is, more often than not, the result of other errors in your cfg file.
A few tips:
Run kamailio -c to be sure there is NO error in your cfg.
Found an error? Monitor what the exact issue is: in one terminal (terminal 1), run tail -fn200 /var/log/syslog
In a second terminal, try restarting your Kamailio server: sudo service kamailio restart
Revisit terminal 1 and look out for the first line with CRITICAL output, like the one below: CRITICAL: <core> [core/cfg.y:3413]: yyerror_at(): parse error in config file /usr/local/etc/kamailio/kamailio.cfg, line 366, column 41: syntax error
Line 366 is most likely the issue, so open the file at that line (366) to fix the problem:
sudo nano +366 /usr/local/etc/kamailio/kamailio.cfg
Let me know if it helps
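The load_module() line also shows that db_mysql itself is missing; it usually ships in a separate package. A hedged example for Debian/Ubuntu (the package name varies by distro):
sudo apt-get install kamailio-mysql-modules   # Debian/Ubuntu package name; adjust for your distro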

File files/default/plugins/README does not exist for cookbook ohai

I'm running a provisioning setup using vagrant and chef-solo to install the gitlab cookbook (however, this problem does not seem to be specific to the gitlab cookbook).
The run-list is simple:
{
  "run_list": [
    "recipe[ohai::default]"
  ]
}
The chef-solo run (chef-solo -c solo.rb -j dna.json) results in the following error:
Error executing action create on resource 'cookbook_file[/etc/chef/ohai_plugins/README]'
The file IS there:
# ls -l /tmp/vagrant-chef/e939c8a8cabcf9cdd72f5d7c3a98d728/cookbooks/ohai-2.0.1/files/default/plugins/README
-rwxrwxrwx. 1 vagrant vagrant 49 Oct 21 13:16 /tmp/vagrant-chef/e939c8a8cabcf9cdd72f5d7c3a98d728/cookbooks/ohai-2.0.1/files/default/plugins/README
When I trace the process with strace, it looks like I can see the source of the error:
10444 0.000210 stat("/tmp/vagrant-chef/cookbooks/cookbooks/ohai/files/default/plugins/README", 0x7ffff0570bb0) = -1 ENOENT (No such file or directory)
If you notice, the cookbook is named 'ohai-2.0.1' however the process is trying to access the cookbook 'ohai' (i.e., without the version number).
Has anyone else encountered this before? I've seen one other post related to the issue that suggested putting ohai::default first in the run-list, which I've done (see dna.json above).
This happened to me while using Berkshelf. It names all of the cookbooks name-version instead of just name. To solve this, I did berks vendor and added that directory to my cookbooks path. Now everything works!
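Concretely, that looks something like this (berks-cookbooks is Berkshelf's default vendor directory; the solo.rb path is illustrative):
berks vendor berks-cookbooks    # writes each cookbook under its plain name, no -version suffix
Then point chef-solo at that directory in solo.rb:
cookbook_path ["/path/to/berks-cookbooks"]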

karaf 3.0.1 not starting up

I'm trying to start Karaf 3.0.1 on a Solaris box (without internet access), but I get the following error:
karaf: Ignoring predefined value for KARAF_HOME
Could not resolve mvn:org.eclipse/org.eclipse.osgi/3.8.2.v20130124-134944
and in karaf.log:
Jun 30, 2014 12:21:09 PM org.apache.karaf.main.Main main
SEVERE: Could not launch framework
java.lang.RuntimeException: Could not resolve mvn:org.eclipse/org.eclipse.osgi/3.8.2.v20130124-134944
at org.apache.karaf.main.util.SimpleMavenResolver.resolve(SimpleMavenResolver.java:59)
at org.apache.karaf.main.Main.createClassLoader(Main.java:315)
at org.apache.karaf.main.Main.launch(Main.java:234)
at org.apache.karaf.main.Main.main(Main.java:171)
the bundles are all in place (in the system folder), and the org.ops4j.pax.url.mvn.cfg file states:
org.ops4j.pax.url.mvn.repositories=\
file:${karaf.home}/${karaf.default.repository}#id=system.repository, \
file:${karaf.data}/kar#id=kar.repository#multi\
http://repo1.maven.org/maven2#id=central,\
http://repository.springsource.com/maven/bundles/release#id=spring.ebr.release,\
http://repository.springsource.com/maven/bundles/external#id=spring.ebr.external
I've tried running the framework using the three methods (server, service, client) but nothing seems to be working.
my environment is:
KARAF_BASE=/export/home/mehdi/bin/karaf
KARAF_HOME=/export/home/mehdi/bin/karaf
KARAF_ETC=/export/home/mehdi/bin/karaf/etc
KARAF_DATA=/export/home/mehdi/bin/karaf/data
JAVA_HOME=/opt/temp/jre1.7.0_13
I googled a bit and found a workaround which says to add -h 127.0.0.1 to the client script, but that did not help either.
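One thing worth verifying (a sketch; the path is derived from the standard Maven repository layout that the mvn: URL maps to) is that the bundle really resolves under the system repository:
ls $KARAF_HOME/system/org/eclipse/org.eclipse.osgi/3.8.2.v20130124-134944/
# the resolver expects org.eclipse.osgi-3.8.2.v20130124-134944.jar in this directory
Separately, the pasted repositories list appears to be missing a comma after #multi, which would merge the kar.repository entry with the central URL on the next line.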
