Logs don't match any of my conditions but they should - rsyslog

First of all, here is the information about my architecture:
Software : Rsyslog v8.24
OS : Debian 9.13
File : /etc/rsyslog.d/splunk.conf
File language : advanced or RainerScript
I have these 3 rulesets in my file:
# Aruba Networks logs filtering
ruleset(name="ArubaNetworksPort") {
if (re_match($msg, "AP:aaa-bbb01-ccc-ap")) then {
action(type="omfile" dynaFile="ArubaNetworksPath")
}
}
# VMware ESX logs filtering
ruleset(name="EsxPort") {
if (re_match($hostname, "tree-[a-zA-Z]{3}to[0-9]{3}")) then {
action(type="omfile" dynaFile="EsxPath")
}
}
# Unclassified logs filtering
ruleset(name="RemoteLogPort") {
*.* action(type="omfile" dynaFile="RemoteLogPath")
}
template (name="ArubaNetworksPath" type="string" string="/var/log/rsyslog/aruba-networks/%FROMHOST%/aruba-networks.log")
template (name="EsxPath" type="string" string="/var/log/rsyslog/esxvmware/%FROMHOST%/esxvmware.log")
template (name="RemoteLogPath" type="string" string="/var/log/remote/unclassified/%FROMHOST%/unclassified.log")
input(type="imudp" port="514" ruleset="ArubaNetworksPort")
input(type="imudp" port="514" ruleset="EsxPort")
input(type="imudp" port="514" ruleset="RemoteLogPort")
And when I directly check the logs, I can see that the message or the hostname does match my filters, yet the logs go to "RemoteLogPath" instead of "ArubaNetworksPath" or "EsxPath".
Any idea what's going on? I can provide more details if you need them, just ask.

You cannot bind the same input to multiple rulesets. See for example this issue.
You probably just want something like this:
ruleset(name="RemoteLogPort") {
if (re_match($msg, "AP:aaa-bbb01-ccc-ap")) then {
action(type="omfile" dynaFile="ArubaNetworksPath")
} else if (re_match($hostname, "tree-[a-zA-Z]{3}to[0-9]{3}")) then {
action(type="omfile" dynaFile="EsxPath")
} else {
action(type="omfile" dynaFile="RemoteLogPath")
}
}
input(type="imudp" port="514" ruleset="RemoteLogPort")
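As a side note, the two regular expressions can be sanity-checked outside rsyslog before reloading the service. A minimal Python sketch (the sample message and hostname strings below are hypothetical, made up only to exercise the patterns; rsyslog's re_match uses POSIX ERE, so `re.search` only approximates its unanchored matching):

```python
import re

# Patterns copied from the rsyslog conditions above
aruba_re = re.compile(r"AP:aaa-bbb01-ccc-ap")
esx_re = re.compile(r"tree-[a-zA-Z]{3}to[0-9]{3}")

# Hypothetical sample values, for illustration only
assert aruba_re.search("May  4 10:00:00 AP:aaa-bbb01-ccc-ap station up") is not None
assert esx_re.search("tree-xyzto042") is not None
assert esx_re.search("tree-xy1to042") is None  # digit where a letter is required
```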

Related

Logstash content based filtering, into multiple indexs

I am currently pulling JSON log files from an S3 bucket which contain different types of logs defined as RawLog, along with another value which is MessageSourceType (there are more metadata fields which I don't care about). Each line on the file is a separate log in case that makes a difference.
I currently have these all going into 1 index as seen in my config below, however, I ideally want to split these out into separate indexes. For example, if the MessageSourceType = Syslog - Linux Host then I need logstash to extract the RawLog as syslog and place it into an index called logs-syslog, whereas if the MessageSourceType = MS Windows Event Logging XML I want it to extract the RawLog as XML and place it in an index called logs-MS_Event_logs.
filter {
mutate {
replace => [ "message", "%{message}" ]
}
json {
source => "message"
remove_field => "message"
}
}
output {
elasticsearch {
hosts => ["http://xx.xx.xx.xx:xxxx","http://xx.xx.xx.xx:xxxx"]
index => "logs-received"
}
}
Also, for a bit of context, here is an example of one of the logs:
{"MsgClassTypeId":"3000","Direction":"0","ImpactedZoneEnum":"0","message":"<30>Feb 13 23:45:24 xx.xx.xx.xx Account=\"\" Action=\"\" Aggregate=\"False\" Amount=\"\" Archive=\"True\" BytesIn=\"\" BytesOut=\"\" CollectionSequence=\"825328\" Command=\"\" CommonEventId=\"3\" CommonEventName=\"General Operations\" CVE=\"\" DateInserted=\"2/13/2021 11:45:24 PM\" DInterface=\"\" DIP=\"\" Direction=\"0\" DirectionName=\"Unknown\" DMAC=\"\" DName=\"\" DNameParsed=\"\" DNameResolved=\"\" DNATIP=\"\" DNATPort=\"-1\" Domain=\"\" DomainOrigin=\"\" DPort=\"-1\" DropLog=\"False\" DropRaw=\"False\" Duration=\"\" EntityId=\"" EventClassification=\"-1\" EventCommonEventID=\"-1\" FalseAlarmRating=\"0\" Forward=\"False\" ForwardToLogMart=\"False\" GLPRAssignedRBP=\"-1\" Group=\"\" HasBeenInserted_EMDB=\"False\" HasBeenQueued_Archiving=\"True\" HasBeenQueued_EventProcessor=\"False\" HasBeenQueued_LogProcessor=\"True\" Hash=\"\" HostID=\"44\" IgnoreGlobalRBPCriteria=\"False\" ImpactedEntityId=\"0\" ImpactedEntityName=\"\" ImpactedHostId=\"-1\" ImpactedHostName=\"\" ImpactedLocationKey=\"\" ImpactedLocationName=\"\" ImpactedNetworkId=\"-1\" ImpactedNetworkName=\"\" ImpactedZoneEnum=\"0\" ImpactedZoneName=\"\" IsDNameParsedValue=\"True\" IsRemote=\"True\" IsSNameParsedValue=\"True\" ItemsIn=\"\" ItemsOut=\"\" LDSVERSION=\"1.1\" Login=\"\" LogMartMode=\"13627389\" LogSourceId=\"158\" LogSourceName=\"ip-xx-xx-xx-xx.eu-west-2.computer.internal Linux Syslog\" MediatorMsgID=\"0\" MediatorSessionID=\"1640\" MsgClassId=\"3999\" MsgClassName=\"Other Operations\" MsgClassTypeId=\"3000\" MsgClassTypeName=\"Operations\" MsgCount=\"1\" MsgDate=\"2021-02-13T23:45:24.0000000+00:00\" MsgDateOrigin=\"0\" MsgSourceHostID=\"44\" MsgSourceTypeId=\"88\" MsgSourceTypeName=\"Syslog - Linux Host\" NormalMsgDate=\"2021-02-13T23:45:24.0540000Z\" Object=\"\" ObjectName=\"\" ObjectType=\"\" OriginEntityId=\"0\" OriginEntityName=\"\" OriginHostId=\"-1\" OriginHostName=\"\" OriginLocationKey=\"\" 
OriginLocationName=\"\" OriginNetworkId=\"-1\" OriginNetworkName=\"\" OriginZoneEnum=\"0\" OriginZoneName=\"\" ParentProcessId=\"\" ParentProcessName=\"\" ParentProcessPath=\"\" PID=\"-1\" Policy=\"\" Priority=\"4\" Process=\"\" ProtocolId=\"-1\" ProtocolName=\"\" Quantity=\"\" Rate=\"\" Reason=\"\" Recipient=\"\" RecipientIdentity=\"\" RecipientIdentityCompany=\"\" RecipientIdentityDepartment=\"\" RecipientIdentityDomain=\"\" RecipientIdentityID=\"-1\" RecipientIdentityTitle=\"\" ResolvedImpactedName=\"\" ResolvedOriginName=\"\" ResponseCode=\"\" Result=\"\" RiskRating=\"0\" RootEntityId=\"9\" Sender=\"\" SenderIdentity=\"\" SenderIdentityCompany=\"\" SenderIdentityDepartment=\"\" SenderIdentityDomain=\"\" SenderIdentityID=\"-1\" SenderIdentityTitle=\"\" SerialNumber=\"\" ServiceId=\"-1\" ServiceName=\"\" Session=\"\" SessionType=\"\" Severity=\"\" SInterface=\"\" SIP=\"\" Size=\"\" SMAC=\"\" SName=\"\" SNameParsed=\"\" SNameResolved=\"\" SNATIP=\"\" SNATPort=\"-1\" SPort=\"-1\" Status=\"\" Subject=\"\" SystemMonitorID=\"9\" ThreatId=\"\" ThreatName=\"\" UniqueID=\"7d4c4ed3-a2fc-44bc-a7ec-0b8b68e7f456\" URL=\"\" UserAgent=\"\" UserImpactedIdentity=\"\" UserImpactedIdentityCompany=\"\" UserImpactedIdentityDomain=\"\" UserImpactedIdentityID=\"-1\" UserImpactedIdentityTitle=\"\" UserOriginIdentity=\"\" UserOriginIdentityCompany=\"\" UserOriginIdentityDepartment=\"\" UserOriginIdentityDomain=\"\" UserOriginIdentityID=\"-1\" UserOriginIdentityTitle=\"\" VendorInfo=\"\" VendorMsgID=\"\" Version=\"\" RawLog=\"02 13 2021 23:45:24 xx.xx.xx.xx <SYSD:INFO> Feb 13 23:45:24 euw2-ec2--001 metricbeat[3031]: 2021-02-13T23:45:24.264Z#011ERROR#011[logstash.node_stats]#011node_stats/node_stats.go:73#011error making http request: Get \\\"https://xx.xx.xx.xx:9600/\\\": dial tcp xx.xx.xx.xx:9600: connect: connection refused\"","CollectionSequence":"825328","NormalMsgDate":"2021-02-13T23:45:24.0540000Z"}
I am a little unsure of the best way to achieve this and thought you might have some suggestions. I have looked into grok and think it may achieve my objective; however, I'm unsure where to start.
You can do this with conditionals in your filter section and define the target index according to the type of logs you're parsing.
filter {
... other filters ...
if [MsgSourceTypeName] == "Syslog - Linux Host" {
mutate {
add_field => {
"[@metadata][target_index]" => "logs-syslog"
}
}
}
else if [MsgSourceTypeName] == "MS Windows Event Logging XML" {
mutate {
add_field => {
"[@metadata][target_index]" => "logs-ms_event_log"
}
}
}
}
output {
elasticsearch {
hosts => ["http://xx.xx.xx.xx:xxxx","http://xx.xx.xx.xx:xxxx"]
index => "%{[@metadata][target_index]}"
}
}
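One caveat: any event whose MsgSourceTypeName matches neither condition has no [@metadata][target_index] field, so its index name would become the literal string %{[@metadata][target_index]}. A sketch of a fallback in the output section, reusing the original logs-received index as the default:

```
output {
  if [@metadata][target_index] {
    elasticsearch {
      hosts => ["http://xx.xx.xx.xx:xxxx","http://xx.xx.xx.xx:xxxx"]
      index => "%{[@metadata][target_index]}"
    }
  } else {
    elasticsearch {
      hosts => ["http://xx.xx.xx.xx:xxxx","http://xx.xx.xx.xx:xxxx"]
      index => "logs-received"
    }
  }
}
```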

How to filter logs based upon log severity in rsyslog?

I am a newbie in rsyslog. I am able to get the logs from client to server, but I need to split them by log severity (meaning INFO, ERROR, WARN).
Try adding this to your rsyslog.conf file on the server side:
module(load="imuxsock") # provides support for local system logging
#module(load="immark") # provides --MARK-- message capability
# provides UDP syslog reception
module(load="imudp")
input(type="imudp" port="514")
# provides TCP syslog reception
module(load="imtcp")
input(type="imtcp" port="50514" ruleset="remote")
ruleset(name="remote") {
# action (type="omfile" file="/var/log/jvh.log")
if $msg contains 'ERROR' then {
action (type="omfile" file="/var/log/jvhErr.log")
}else if $msg contains 'INFO' then {
action(type="omfile" file="/var/log/jvhInfo.log")
}else {
action(type="omfile" file ="/var/log/jvhOther.log")
}
}
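Note that `contains` does a plain, case-sensitive substring match on the message text. If your clients set the real syslog severity in the PRI field, you could filter on rsyslog's built-in $syslogseverity property instead; a sketch under that assumption (file paths kept from above):

```
ruleset(name="remote") {
    # 0-3 = emergency..error, 6 = informational
    if $syslogseverity <= 3 then {
        action(type="omfile" file="/var/log/jvhErr.log")
    } else if $syslogseverity == 6 then {
        action(type="omfile" file="/var/log/jvhInfo.log")
    } else {
        action(type="omfile" file="/var/log/jvhOther.log")
    }
}
```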

Multiple filebeat to one logstash. How to optimize the configuration

I have 10 servers with Filebeat installed on each.
Each server monitors 2 applications, a total of 20 applications.
I have one Logstash server which collects all the above logs, filters them, and passes them to Elasticsearch.
To read one file from one server, I use the below Logstash configuration:
input {
beats {
port => 5044
}
}
filter {
grok {
match => {"message" =>"\[%{TIMESTAMP_ISO8601:timestamp}\]%{SPACE}\[%{DATA:Severity}\]%{SPACE}\[%{DATA:Plugin}\]%{SPACE}\[%{DATA:Servername}\](?<short_message>(.|\r|\n)*)"}
}
}
output {
elasticsearch {
hosts => ["<ESserverip>:9200"]
index => "groklogs"
}
stdout { codec => rubydebug }
}
And this is the filebeat configuration:
paths:
  - D:\ELK 7.1.0\elasticsearch-7.1.0-windows-x86_64\elasticsearch-7.1.0\logs\*.log
output.logstash:
  hosts: ["<logstaship>:5044"]
Can anyone please give me an example of:
How I should convert the above to receive from multiple applications
from multiple servers.
Should I configure multiple ports? How?
How should I use multiple groks?
How can I optimize it into a single or minimal Logstash configuration file?
How will a typical setup look? Please help me.
You can use tags in order to differentiate between applications (log patterns).
As Filebeat provides metadata, the field beat.name will give you the ability to filter on the server(s) you want.
Multiple inputs of type log, each with a different tag, should be sufficient.
See these examples to get you started.
Logstash
filter {
if "APP1" in [tags] {
grok {
...
}
}
if "APP2" in [tags] {
grok {
...
}
}
}
Filebeat
filebeat.inputs:
- type: log
  paths:
    - /var/log/system.log
    - /var/log/wifi.log
  tags: ["APP1"]
- type: log
  paths:
    - "/var/log/apache2/*"
  tags: ["APP2"]
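To keep everything in a single Logstash configuration, the same tags can drive the output section as well; a sketch along the lines of the filter above (the index names app1-logs and app2-logs are made up):

```
output {
  if "APP1" in [tags] {
    elasticsearch {
      hosts => ["<ESserverip>:9200"]
      index => "app1-logs"
    }
  } else if "APP2" in [tags] {
    elasticsearch {
      hosts => ["<ESserverip>:9200"]
      index => "app2-logs"
    }
  }
}
```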

logstash:how to use environment variables in input host

I want to print the 'host source' to the output. For this goal, local or global variables are necessary, but I don't want to use global variables like 'export ...'.
So before the input{}, I put the host in metadata and then use it in 'input{}'.
Like below:
filter{
environment{
add_field =>{
"[@metadata][TEMP]" => "127.0.0.1"
}
}
}
input{
udp{
host => "%{[@metadata][TEMP]}"
port => "10000"
}
}
output{
udp{
host => "127.0.0.1"
port => "10001"
}
}
But Logstash does not run, and the log looks like this:
[WARN ][logstash.inputs.udp ] UDP listener died {:exception=>#<SocketError: bind: name or service not known>}
So how can I solve this problem?
Let me try to answer your question in two steps.
The error message
Your config file is malformed. The workflow is always like this:
# This is a comment. You should use comments to describe
# parts of your configuration.
input {
...
}
filter {
...
}
output {
...
}
That is why you get the error message: your filter is in the wrong place, and filters are never applied before the input.
Multiple input sources
If you want to add information to your events depending on which input is used, you can add a type during input handling. Here is an example config file:
input {
file {
type => "file"
path => "/var/log/some_name.log"
}
udp{
type => "udp"
host => "127.0.0.1"
port => "10001"
}
}
filter {
# can be omitted, if not used
}
output {
udp{
host => "127.0.0.1"
port => "10001"
}
}
The type is stored as part of the event itself, so you can also use the type to search for it in Kibana.
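For instance, a conditional on the type in the output section could route file events and UDP events differently; a sketch building on the config above:

```
output {
  if [type] == "udp" {
    udp {
      host => "127.0.0.1"
      port => "10001"
    }
  } else {
    stdout { codec => rubydebug }
  }
}
```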

How is it possible to pass a file through Logstash?

I want to pass a file through Logstash for debugging purposes. Is it somehow possible?
If I use the following config to echo stdin input to stdout, it works fine:
input { stdin { } }
output { stdout { } }
But when I want to do the same for a file, it does not work:
input {
file {
path => ["/home/logstash/xunit.json"]
}
}
output {
stdout { }
}
I only see the following warning, and nothing more:
Using milestone 2 input plugin 'file'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.4.2/plugin-milestones {:level=>:warn}
What am I doing wrong?
P.S. If using verbose mode, I see the following lines, then everything hangs:
Registering file input {:path=>["/home/logstash/xunit.json"], :level=>:info}
Pipeline started {:level=>:info}
P.P.S. The user has access to the file.
