Modify rsyslog output when relaying

I am using a server with rsyslog to send logs to Loggly (via action(type="omfwd" ...)) from a variety of network devices.
Unfortunately, some devices are not showing up correctly: my switch with hostname switch1950a is seen as host "2019" on Loggly. I want to add a few lines prior to forwarding to modify this hostname.
syslog:
appName:switch1950a
facility:local use 7
host:2019
priority:187
severity:Error
timestamp:2019-05-21T10:39:36+02:00
This is a new install. I have ensured $PreserveFQDN on is in the config.
Rsyslog config file:
template(name="LogglyFormat" type="string"
string="<%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% [XXXXXXX#4XXXX tag=\"RsyslogTLS\"] %msg%\n"
)
action(type="omfwd" protocol="tcp" target="logs-01.loggly.com" port="6514" template="LogglyFormat" StreamDriver="gtls" StreamDriverMode="1" StreamDriverAuthMode="x509/name" StreamDriverPermittedPeers="*.loggly.com")
The received output should be:
syslog:
appName:switch1950a
facility:local use 7
host:switch1950a
priority:187
severity:Error
timestamp:2019-05-21T10:39:36+02:00
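One way to do the rewrite (a minimal, untested sketch; the $.realhost variable name is my own) is to normalise the hostname in RainerScript before the action runs, store it in a local variable, and reference that variable from the template instead of %HOSTNAME%:
# Map the bogus hostname to the real one; pass all others through unchanged.
if $hostname == "2019" then {
    set $.realhost = "switch1950a";
} else {
    set $.realhost = $hostname;
}
template(name="LogglyFormat" type="string"
string="<%pri%>%protocol-version% %timestamp:::date-rfc3339% %$.realhost% %app-name% %procid% %msgid% [XXXXXXX#4XXXX tag=\"RsyslogTLS\"] %msg%\n"
)
The if/set block has to appear before the omfwd action in the same ruleset; local variables ($.name) can be referenced in string templates as %$.realhost%.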


Mosquitto Broker - Can still sign in with no credentials after allow_anonymous set to false

Following Steve Cope's "How to install the Mosquitto Broker on Windows", I created a password.txt file and hashed it using the mosquitto_passwd utility.
Then I edited the mosquitto.conf file by uncommenting allow_anonymous and setting it to false, and uncommenting password_file and setting it to the path of my password.txt file (in the same folder as mosquitto.conf).
Using MQTT Explorer I am able to log into the broker using the credentials in my password.txt file, but I am also able to still log in leaving user and password blank.
I've seen similar questions asked here, but I can't find any solutions that have worked, so please point me in the right direction. I'm using mosquitto 2.0.14 x64 on Windows 10.
Edit:
Only edit done to mosquitto.conf is uncommenting the lines as follows:
# acl_file
allow_anonymous false
# allow_zero_length_clientid
# auto_id_prefix
password_file C:\Users\'MyName'\mosquitto\password.txt
# plugin
# plugin_opt_*
# psk_file
Solution Found:
Adding 'listener 1883' before allow_anonymous false has got it working, although I am unsure why that makes a difference. (Most likely because, in Mosquitto 2.x, a config file that defines no listener gives you a local-only default listener on which anonymous access is allowed; allow_anonymous and password_file only take effect as expected once a listener is declared explicitly.)
Config file as follows:
# acl_file
listener 1883
allow_anonymous false
# allow_zero_length_clientid
# auto_id_prefix
password_file C:\Users\'MyName'\mosquitto\password.txt
# plugin
# plugin_opt_*
# psk_file
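A quick way to verify the fix from the command line (mosquitto_sub ships with the broker; the topic and credentials below are placeholders):
# Anonymous connections should now be rejected:
mosquitto_sub -h localhost -p 1883 -t "test/#"
# Connection error: Connection Refused: not authorised.
# With credentials from password.txt it should connect and wait for messages:
mosquitto_sub -h localhost -p 1883 -t "test/#" -u myuser -P mypassword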

How to change MonetDB log settings (merovingian.log)

I want to change the log level of merovingian.log so that ERROR and WARN messages are written to it.
At the moment, merovingian.log contains only MSG lines:
2016-07-22 18:12:03 MSG merovingian[7825]: proxying client x.x.x.x:51609 for database 'test' to mapi:monetdb:///var/MonetDB/dbfarm/test/.mapi.sock?database=test
2016-07-22 18:12:03 MSG merovingian[7825]: target connection is on local UNIX domain socket, passing on filedescriptor instead of proxying
The OS is CentOS 6.4 and the MonetDB version is MonetDB-11.19.7.
Any advice on how to solve this problem?
No fine-tuning is provided over where log information ends up.
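As a workaround, the file can be filtered after the fact. A sketch, assuming merovingian tags lines with severity tokens such as ERR/WARN and that the dbfarm lives at the path implied by the question:
grep -E ' (ERR|WARN) ' /var/MonetDB/dbfarm/merovingian.log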

Ruby ODBC with remote database

I am working on an application that connects to a legacy database, Eloquence, through ODBC and SQL/R. I set up my server with unixODBC and configured the drivers and datasources as follows:
File /etc/odbcinst.ini
[SQLR]
Description=SQLR for Eloquence
Driver=/opt/sqlr/lib/libsqlrodbc.so
Driver64=/opt/sqlr/lib64/libsqlrodbc64.so
FileUsage = 1
File /etc/odbc.ini
[reservations]
Description = SQLR datasource for RES database
Driver = SQLR
Database = res
Servername = eloq-dev
Port = 8003
UserName = sqlrodbc
I confirmed that I can connect to the datasource by running isql reservations, and I ran a couple of queries to make sure; no issues. Then I connected my Ruby code to the database using the following code:
require 'rdbi-driver-odbc'
RDBI.connect :ODBC, db: "reservations"
Which outputs the following error:
Unable to connect to host.
Host 127.0.0.1, Service sqlrodbc
errno 111: Connection refused
ODBC::Error: 08001 (3047) [unixODBC][Marxmeier][SQL/R ODBC Client]connection failure
I'm concerned that it's using 127.0.0.1 as the host even though the eloq-dev hostname is set in /etc/hosts to a different address. I'm also concerned that isql works, but the ODBC gem doesn't.
Additionally, when I use the tcpdump command, the only output related to my connection is this:
tcpdump -i lo
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on lo, link-type EN10MB (Ethernet), capture size 65535 bytes
18:38:39.688264 IP localhost.50447 > localhost.mcreport: Flags [S], seq 3355035364, win 43690, options [mss 65495,sackOK,TS val 1655798115 ecr 0,nop,wscale 7], length 0
18:38:39.688280 IP localhost.mcreport > localhost.50447: Flags [R.], seq 0, ack 3355035365, win 0, length 0
No packets are going out over the network at all.
I've also changed my code to use RDBI instead of Ruby-ODBC, but I have the same issue.
My issue was ultimately twofold. I was connecting to Eloquence and SQL/R over a VPN connection which wasn't as stable as I thought, so connections were dropping as a result.
The other issue was that SQL/R uses Server instead of Servername and Service instead of Port in the odbc.ini file.
Once I stabilized my VPN and fixed the odbc.ini file, I was able to connect without issue.
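For reference, the working odbc.ini stanza would then look like this (same values as above, with only the two key names changed):
[reservations]
Description = SQLR datasource for RES database
Driver = SQLR
Database = res
Server = eloq-dev
Service = 8003
UserName = sqlrodbc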

FunkLoad monitor doesn't show any graphs in report

I set everything up according to the tutorial at http://funkload.nuxeo.org/monitoring.html, started the monitor server, ran a bench test, and built the report. But the report contains none of the monitoring graphs... Any idea? I am using the credential server as well, and that was and still is working correctly; it's just that after I added the monitoring pieces, nothing seems to change.
monitor.conf
[server]
host = localhost
port = 8008
interval = .5
interface = eth0
[client]
host = localhost
port = 8008
my_test.conf:
[main]
title= some title
description= some descr
url=http://localhost:8000
... some other not important lines here
[monitor]
hosts=localhost
[localhost]
port=8008
description=The benching machine
Use
sudo easy_install -f http://funkload.nuxeo.org/snapshots/ -U funkload
instead of just
pip install funkload
It looks like pip has an old, broken version of funkload.
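To confirm which version actually ended up installed (a quick check; assumes setuptools is available):
python -c "import pkg_resources; print(pkg_resources.get_distribution('funkload').version)"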

Using nxlog to ship logs in to logstash from Windows using om_ssl

I have been looking at options to ship logs from Windows. I already have logstash set up, and I currently ship logs from Linux (CentOS) servers to my ELK stack using the logstash-forwarder and SSL encryption.
For compliance reasons, encryption is pretty much essential in this environment.
I was hoping to use logstash-forwarder on Windows as well, but after compiling it with Go I ran into issues shipping Event Logs, and I found some people saying that it wasn't possible because of file-locking issues, which the logstash-forwarder people appear to be working on, but I can't really wait.
Anyway, eventually I found out that nxlog seems to be able to ship logs in an encrypted format using SSL. I've found a few posts about similar topics, and while I've learned quite a bit about how to ship the logs across and how to set up nxlog, I am still at a loss as to how to set up logstash to accept the logs so I can process them.
I've asked in the #nxlog and #logstash IRC channels, and got some confirmation in #nxlog that it is possible, but no further information on how it should be configured.
Anyway, I have taken the crt file created for use with my logstash-forwarder (I will create a new one if needed once I am happy that this will work) and renamed it with a pem extension, which I believe should work as it is readable in ASCII format. I have created the environment variable %CERTDIR% and put my file in there. I have written the following config file for nxlog based on the other articles I have read; I think it is right, but I am not 100% sure:
## This is a sample configuration file. See the nxlog reference manual about the
## configuration options. It should be installed locally and is also available
## online at http://nxlog.org/nxlog-docs/en/nxlog-reference-manual.html
## Please set the ROOT to the folder your nxlog was installed into,
## otherwise it will not start.
#define ROOT C:\Program Files\nxlog
define ROOT C:\Program Files (x86)\nxlog
Moduledir %ROOT%\modules
CacheDir %ROOT%\data
Pidfile %ROOT%\data\nxlog.pid
SpoolDir %ROOT%\data
LogFile %ROOT%\data\nxlog.log
# Enable json extension
<Extension json>
    Module xm_json
</Extension>
# Nxlog internal logs
<Input internal>
    Module im_internal
    Exec $EventReceivedTime = integer($EventReceivedTime) / 1000000; to_json();
</Input>
# Windows Event Log
<Input eventlog>
    # Uncomment im_msvistalog for Windows Vista/2008 and later
    Module im_msvistalog
    # Uncomment im_mseventlog for Windows XP/2000/2003
    # Module im_mseventlog
    Exec $EventReceivedTime = integer($EventReceivedTime) / 1000000; to_json();
</Input>
<Output sslout>
    Module om_ssl
    Host lumberjack.domain.com
    Port 5000
    CertFile %CERTDIR%/logstash-forwarder.crt
    AllowUntrusted TRUE
    OutputType Binary
</Output>
<Route 1>
    Path eventlog, internal => sslout
</Route>
What I want to know is what input format to use in logstash. I have tried shipping logs into a lumberjack input (using the same config as my logstash-forwarders use) with the following config:
input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
But when the service started I got the following in the nxlog log files:
2014-11-06 21:16:20 INFO connecting to lumberjack.domain.com:5000
2014-11-06 21:16:20 INFO nxlog-ce-2.8.1248 started
2014-11-06 21:16:21 INFO successfully connected to lumberjack.domain.com:5000
2014-11-06 21:16:22 INFO remote closed SSL socket
2014-11-06 21:16:22 INFO reconnecting in 1 seconds
2014-11-06 21:16:23 INFO connecting to lumberjack.domain.com:5000
2014-11-06 21:16:24 INFO reconnecting in 2 seconds
2014-11-06 21:16:24 ERROR couldn't connect to ssl socket on lumberjack.domain.com:5000; No connection could be made because the target machine actively refused it.
When I turned the logging up to DEBUG I saw a massive number of log lines flying through, but I think the key part is:
2014-11-06 21:20:18 ERROR Exception was caused by "rv" at om_ssl.c:532/io_err_handler(); [om_ssl.c:532/io_err_handler()] -; [om_ssl.c:501/om_ssl_connect()] couldn't connect to ssl socket on lumberjack.domain.com:5000; No connection could be made because the target machine actively refused it.
I assume this points to me using the wrong input method on logstash, but I guess it could also be an issue with my SSL certs or the way they are configured. No logs appear to be generated on the logstash server at the time I make the connection from my Windows machine.
Thanks to b0ti for the help. There were a number of issues: my logstash config was crashing the service, and I also had problems with my nxlog setup as well as with the way my SSL certs were set up.
I found this post about creating SSL certs, which covers really nicely how to set up self-signed certs for use with a web service.
The main thing wrong with nxlog was, as b0ti pointed out, that I was trying to ship in binary, which only works when shipping to another nxlog instance. I also noticed in the docs that the default for AllowUntrusted is false, so I just deleted it once I was happy SSL was working.
<Output sslout>
    Module om_ssl
    Host lumberjack.domain.com
    Port 5001
    CAFile %CERTDIR%\nxlog-ca.crt
    OutputType LineBased
</Output>
Create the CA key and secure it, as this needs to be kept secret (cd to /etc/pki/tls first):
certtool --generate-privkey --bits 2048 --outfile private/nxlog-ca.key
chown logstash:logstash private/nxlog-ca.key
chmod 600 private/nxlog-ca.key
And then the self-signed CA cert, which will need to be transferred to your clients:
certtool --generate-self-signed --load-privkey private/nxlog-ca.key --bits 2048 --template nxlog-ca-rules.cnf --outfile certs/nxlog-ca.crt
The cnf file is standard, with only this option modified:
# Whether this is a CA certificate or not
ca
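For context, a complete minimal CA template might look like this (the cn and expiration_days values are illustrative, not from the original post):
# certtool template for the self-signed CA
cn = "nxlog CA"
expiration_days = 3650
# Whether this is a CA certificate or not
ca
cert_signing_key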
The logstash input method:
input {
  tcp {
    port => 5001
    type => "nxlogs"
    ssl_cacert => "/etc/pki/tls/certs/nxlog-ca.crt"
    ssl_cert => "/etc/pki/tls/certs/nxlog.crt"
    ssl_key => "/etc/pki/tls/private/nxlog.key"
    ssl_enable => true
    format => 'json'
  }
}
Generate the private key:
certtool --generate-privkey --bits 2048 --outfile private/nxlog.key
chown logstash:logstash private/nxlog.key
chmod 600 private/nxlog.key
Generate the CSR (Certificate Signing Request):
certtool --generate-request --bits 2048 --load-privkey private/nxlog.key --outfile private/nxlog.csr
Sign the cert with the CA private key:
certtool --generate-certificate --bits 2048 --load-request private/nxlog.csr --outfile certs/nxlog.crt --load-ca-certificate certs/nxlog-ca.crt --load-ca-privkey private/nxlog-ca.key --template nxlog-rules.cnf
Again, the only important parts beyond the standard cnf inputs will be:
# Whether this certificate will be used to encrypt data (needed
# in TLS RSA ciphersuites). Note that it is preferred to use different
# keys for encryption and signing.
encryption_key
# Whether this certificate will be used for a TLS client
tls_www_client
I've tested this and it works well; I just need to get the filters set up now.
The binary data format is nxlog-specific; you should only use it if you are sending to another nxlog instance.
OutputType Binary
If this doesn't help, check the logstash logs, since it's the remote end (logstash) that closes the connection.
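One way to check the TLS side independently of nxlog is to connect with openssl (a diagnostic sketch, using the host, port, and CA file from the setup above):
openssl s_client -connect lumberjack.domain.com:5001 -CAfile /etc/pki/tls/certs/nxlog-ca.crt
# A successful handshake ends with: Verify return code: 0 (ok)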
