rsyslog if-then clause fails when an action is provided

I want to filter some logs from clients and push them to Kafka; however, it doesn't work. My config file:
module(load="omkafka")
module(load="imtcp" streamdriver.mode="1" streamdriver.authmode="anon")
input(type="imtcp" port="10514")
if $msg contains 'topic-*' then action(type="omkafka" topic="topic" broker=["10.3.1.9:9092", "10.3.1.8:9092", "10.3.1.7:9092", "10.3.1.6:9092"])
When I remove the if...then clause, it works.
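For what it's worth, rsyslog's contains operator does a plain substring match; the '*' is not expanded as a wildcard, so the filter above only matches messages that literally contain the text "topic-*". A minimal sketch of the filter as it was presumably intended (matching the literal prefix "topic-"), with the action wrapped in a block:

if $msg contains 'topic-' then {
    action(type="omkafka"
           topic="topic"
           broker=["10.3.1.9:9092", "10.3.1.8:9092", "10.3.1.7:9092", "10.3.1.6:9092"])
}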

Related

rsyslogd does not write data to logfile when configured with TLS

I'm trying to set up rsyslog with TLS to forward specific records from /var/log/auth.log from host A to a remote server B.
The configuration file I wrote for rsyslog is the following:
$DefaultNetstreamDriverCAFile /etc/licensing/certificates/ca.pem
$DefaultNetstreamDriverCertFile /etc/licensing/certificates/client-cert.pem
$DefaultNetstreamDriverKeyFile /etc/licensing/certificates/client-key.pem
$InputFilePollInterval 10
#Read from the auth.log file and assign the tag "ssl-auth" for its messages
input(type="imfile"
      File="/var/log/auth.log"
      reopenOnTruncate="on"
      deleteStateOnFileDelete="on"
      Tag="ssl-auth")
$template auth_log, " %msg% "
# Send ssl traffic to server on port 514
if ($syslogtag == 'ssl-auth') then {
    action(type="omfwd"
           protocol="tcp"
           target="<ip#server>"
           port="514"
           template="auth_log"
           StreamDriver="gtls"
           StreamDriverMode="1"
           StreamDriverAuthMode="x509/name")
}
Using this configuration, when I ssh into host A for the first time from another host X, everything works fine: the file /var/log/auth.log is written and tcpdump shows traffic towards server B.
But from then on, it does not work anymore.
Even if I log out of host A and log back in, no matter how many times I do, /var/log/auth.log is never written and no traffic shows up in tcpdump.
The very strange thing is that if I remove TLS from the configuration, it works.
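One thing worth checking (an assumption, not a confirmed diagnosis): an action that cannot deliver can stall the rest of the pipeline, so if the TLS session hangs after the first delivery, nothing further gets processed. A sketch of the same omfwd action with its own disk-assisted queue and unlimited retries, so a stuck TLS connection cannot block rsyslog as a whole; the queue parameter values here are illustrative, not tuned:

# Assumption: give the TLS forwarding action a dedicated disk-assisted queue
# and unlimited retries so a stalled connection does not block the main queue.
if ($syslogtag == 'ssl-auth') then {
    action(type="omfwd"
           protocol="tcp"
           target="<ip#server>"
           port="514"
           template="auth_log"
           StreamDriver="gtls"
           StreamDriverMode="1"
           StreamDriverAuthMode="x509/name"
           action.resumeRetryCount="-1"
           queue.type="LinkedList"
           queue.filename="fwd_tls"
           queue.saveOnShutdown="on")
}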

gcloud cli failing to add record when contents start with dash

I'm working with the Let's Encrypt dns-01 challenge system, which entails dynamically creating a TXT record in Google Cloud DNS with specific content so LE can assert proof of ownership for generating a wildcard certificate (so I can't use http-01). The problem is that sometimes LE tells me to create a TXT record that starts with a "-", for example -E_DFDFHJKF1783FSHDJ. I cannot get the gcloud CLI to accept this data no matter what I do.
Example:
gcloud dns record-sets transaction start --zone=myzone
gcloud dns record-sets transaction add "-E_ASDFSDF" --ttl=30 --zone=myzone --name=test --type=TXT
gcloud dns record-sets transaction remove "-A_DSFKHSDF" --ttl=30 --zone=myzone --name=test2 --type=TXT
If you run those commands and inspect the resulting transaction.yaml, you can see whether it contains the right string. If it worked correctly, you should see something like:
- kind: dns#resourceRecordSet
  name: test.
  rrdatas:
  - '"ASDFASDF"'
  ttl: 30
  type: TXT
I am executing this via Node's child_process, but I have the issue even if I execute it directly from bash, so Node isn't really a meaningful factor at the moment. I've tried echoing the value in. I've tried setting an environment variable and using that in the string.
No matter what I do I get an error like the following:
ERROR: (gcloud.dns.record-sets.transaction.add) unrecognized arguments: -E_ASDFSDF
It turns out some characters need to be escaped in the CLI. I can confirm that the following works:
gcloud dns --project=myprojectid record-sets transaction add "\-test123" --name=test.mydomain.com. --ttl=300 --type=TXT --zone=myzoneid
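For completeness, a sketch of the full transaction flow with the escaped value (the project ID, zone ID, and record name are placeholders); nothing is actually created in Cloud DNS until the transaction is executed:

gcloud dns record-sets transaction start --zone=myzoneid --project=myprojectid
gcloud dns record-sets transaction add "\-E_ASDFSDF" --name=test.mydomain.com. --ttl=300 --type=TXT --zone=myzoneid --project=myprojectid
gcloud dns record-sets transaction execute --zone=myzoneid --project=myprojectid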

Orphaned SYSTEM.MANAGED.DURABLE.* queue in Websphere MQ

I have a queue 'SYSTEM.MANAGED.DURABLE.ABCD***109' that is receiving messages all the time with no one consuming them.
I tried to find its subscription but got the following result:
dis sub(*) where (DEST LK 'SYSTEM.MANAGED.DURABLE.ABCD***109')
AMQ8096: IBM MQ subscription inquired.
SUBID(414D5120******************44A0109)
SUB(false)
DEST(SYSTEM.MANAGED.DURABLE.ABCD***44A0108)
Then I tried to view the subscription status via the subscription ID listed:
dis sbstatus(*) where ( SUBID EQ '414D5120***44A0109')
AMQ8099: IBM MQ subscription status inquired.
SUB(false)
SUBID(414D5120***44A0109)
I don't have a subscription named "false". I'm unable to clear or delete this queue as it is open. I'm also unable to view the open connection:
dis conn(*) where (objname eq 'SYSTEM.MANAGED.DURABLE.ABCD***44A0108')
AMQ8461: Connection identifier not found.
I need to clean up and delete this queue to avoid a disk space issue.
You can remove SUB objects with only the SUBID; try to remove it with this command:
DELETE SUB SUBID('414D5120***44A0109')
Note that the command is not specifying the SUB name, just the SUB keyword.
Before you delete it, if you are interested in seeing what the sub name actually is, you may want to try running the following command to dump the subscriptions:
amqldmpa -m <QueueManager> -c T -f /var/mqm/errors/amqldmpa_topic.out
Inside of the file /var/mqm/errors/amqldmpa_topic.out search for the SUBID in question and look for text similar to this:
Subscriber entry
{
SubId ( 414D5120***44A0109)
SubNameString ( SUBNAME_HERE )
TopicString ( TOPIC/STRING/HERE )
<more lines of information go here>
}
What does it show for the SubNameString field? Note that in the 8.0.0.6 version I ran this against, it seems to pad each field with a leading and trailing space, with the exception of SubId, which did not have a trailing space.
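Putting it together, a sketch of how to script the check and the delete through runmqsc (the queue manager name QMGR1 is a placeholder; the SUBID is the one displayed above):

echo "DIS SBSTATUS(*) WHERE(SUBID EQ '414D5120***44A0109')" | runmqsc QMGR1
echo "DELETE SUB SUBID('414D5120***44A0109')" | runmqsc QMGR1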

After deleting an index from logstash, can we point to the same log file again for another index?

I deleted an index that contained logs from a certain file. Even after deleting that index, why doesn't Logstash/Elasticsearch read the same log file again when creating a new index? And whose role is it to read the logs: Elasticsearch or Logstash?
Logstash reads your logs and puts them into Elasticsearch. There is something called a sincedb that Logstash uses to keep track of what files it has already processed. If you remove it and restart Logstash, it should reprocess all of your logs.
If there is a specific log you want to reparse, the easiest way to do it is to do this:
mv logfile logfile.copy
cp logfile.copy logfile
rm logfile.copy
This gives it a new inode and makes Logstash think it is a new log.
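Alternatively, if you want a particular file to be read from the beginning regardless of what was processed before, here is a sketch of a file input that never persists its read position (the path is a placeholder):

input {
  file {
    path => "/var/log/myapp.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"   # never remember how far we read, so the file is reprocessed on every restart
  }
}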

Rsyslog's filter is unable to filter specified logs

I have logs being forwarded to my syslog server, and I have built a filter in the rsyslog.conf file that should put the logs into a separate logfile if they contain "username". Unfortunately, it doesn't seem to be working. The filter I use is:
if ($fromhost-ip == '192.x.x.x.' and $msg contains 'Username' and $msg contains 'test') then /var/log/new.log;RFC3164fmt
Thanks for your help.
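A few things to check (assumptions, since the full rsyslog.conf is not shown): rsyslog's contains operator is case-sensitive, so contains 'Username' will not match a lowercase "username" in the message; if the trailing '.' in the IP literal is really in your config (and not just redaction in the question), the equality test can never match; and the RFC3164fmt template must be defined before it is referenced. A sketch of the same filter with a template definition and an explicit action (the IP is still a placeholder):

$template RFC3164fmt,"<%PRI%>%TIMESTAMP% %HOSTNAME% %syslogtag%%msg%\n"

if ($fromhost-ip == '192.x.x.x' and $msg contains 'Username' and $msg contains 'test') then {
    action(type="omfile" file="/var/log/new.log" template="RFC3164fmt")
}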
