Send Payara logs to Graylog via syslog and set the correct source (rsyslog)

I have a Graylog instance running a UDP syslog input on port 1514.
It's working wonderfully well for all the system logs of the Linux servers.
When I try to ingest Payara logs [1], though, the "source" of the message is set to "localhost" in Graylog, while it is normally the hostname of the sending server.
This is suboptimal, because I also want the application logs in Graylog with the correct source.
I googled around and found:
https://github.com/payara/Payara/blob/payara-server-5.2021.5/nucleus/core/logging/src/main/java/com/sun/enterprise/server/logging/SyslogHandler.java#L122
It seems the syslog "source" is hard-coded to "localhost" in Payara.
Is there a way to send Payara logs with the correct "source" set?
I have nothing to do with the application server itself; I just want to receive the logs with the correct source (the hostname of the sending server).
Example log entry in /var/log/syslog for Payara:
Mar 10 10:00:20 localhost [ INFO glassfish ] Bootstrapping Monitoring Console Runtime
I suspect I want the "localhost" in the above example set to the FQDN of the host.
Any ideas?
Best regards
[1] In logging.properties:
com.sun.enterprise.server.logging.SyslogHandler.useSystemLogging=true

Try enabling "store full message" in the syslog input settings.
That will add a full_message field to your log messages, containing the header in addition to what you see in the message field. Then you can see if the source IP is in the UDP packet. If so, collect those messages via a raw/plaintext UDP input, and the source should show correctly.
You may have to parse the rest of the message via an extractor or pipeline rule, but at least you'll have the source.
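For example, a pipeline rule roughly like this could pull fields back out of the raw line. This is only a sketch: the grok pattern and the field names are guesses at the Payara line layout, not a tested rule.

rule "parse raw payara syslog"
when
  has_field("message")
then
  // hypothetical pattern; adjust it to the actual payara message layout
  let parsed = grok(pattern: "%{SYSLOGTIMESTAMP:log_timestamp} %{HOSTNAME:log_host} %{GREEDYDATA:log_message}", value: to_string($message.message));
  set_fields(parsed);
end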

Well,
this might not exactly be a good solution but I tweaked the rsyslog template for graylog.
I deploy the rsyslog-config via Puppet, so I can generate "$YOURHOSTNAME-PAYARA" dynamically using the facts.
This way, I at least have the correct source set.
$template GRAYLOGRFC5424,"<%PRI%>%PROTOCOL-VERSION% %TIMESTAMP:::date-rfc3339% YOURHOSTNAME-PAYARA %APP-NAME% %PROCID% %MSGID% %STRUCTURED-DATA% %msg%\n"
# note: a single @ means plain UDP forwarding (the graylog input is UDP on 1514)
if $msg contains 'glassfish' then {
    *.* @loghost.domain:1514;GRAYLOGRFC5424
    & stop
} else {
    *.* @loghost.domain:1514;RSYSLOG_SyslogProtocol23Format
}
The other thing we did is actually activating application logging through log4j and its syslog appender:
<Syslog name="syslog_app" appName="DEMO" host="loghost" port="1514" protocol="UDP" format="RFC5424" facility="LOCAL0" enterpriseId="">
    <LoggerFields>
        <KeyValuePair key="thread" value="%t"/>
        <KeyValuePair key="priority" value="%p"/>
        <KeyValuePair key="category" value="%c"/>
        <KeyValuePair key="exception" value="%ex"/>
    </LoggerFields>
</Syslog>
This way, we can ingest the glassfish server logs and the independent application logs into graylog.
The "LoggerFields" in log4j.xml appear to be key-value pairs for the "StructuredDataElements" according to RFC5424.
https://logging.apache.org/log4j/2.x/manual/appenders.html
https://datatracker.ietf.org/doc/html/rfc5424

That's the problem with UDP Syslog. The sender gets to set the source in the header. There is no "best answer" to this question. When the information isn't present, it's hard for Graylog to pass it along.
It sounds like you may have found an answer that works for you. Go with it. Using log4j solves two problems and lets you define the source yourself.
For those who face a similar issue, a simpler way to solve the source problem might be to use a static field. If you send the payara syslog messages to their own input, you can create a static field on that input to stand in for the source. Call it "app_name" or "app_source" or something, and use that field for whatever sorting you need to do.
Alternatively, if you have just one source for application messages, you could use a pipeline to set the value of the source field to the IP or FQDN of the payara server. Then it displays like all the rest.
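A minimal sketch of such a pipeline rule, where "payara01.example.com" is a placeholder for your Payara server's FQDN or IP:

rule "payara: rewrite source"
when
  to_string($message.source) == "localhost"
then
  // placeholder hostname; substitute the payara server's FQDN or IP
  set_field("source", "payara01.example.com");
end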

Related

Delete Host information from Elasticsearch

I upload some logs into Elasticsearch via Filebeat, but some other information is added to my original logs, like the hostname, OS kernel, and other information about the host, and the main message becomes unformatted. I want to delete all the unnecessary fields and keep only my original message in its initial form.
I have tried to remove add_host_metadata from filebeat.yml, but the problem still persists.
I'm working with ELK on Windows.
You could use the include_fields processor, or you could use drop_fields for the fields you don't need. Filebeat will sometimes add fields such as host or log, which can be dropped. There are some fields that can't be dropped, though.
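A minimal drop_fields sketch for filebeat.yml; the field list here is only an example, so adjust it to whatever Filebeat actually adds in your setup:

processors:
  - drop_fields:
      # example field list; check your own events for what to drop
      fields: ["host", "agent", "ecs", "input", "log"]
      ignore_missing: true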

Piping / Filtering Windows DNS Server logs

I am looking to log all the unique hosts which have had any transaction with my Windows DNS Server.
I found that there is an option to log my DNS server transactions via the Set-DnsServerDiagnostics PS command.
However, it is quite heavy, and I am not interested in most of the data there. I just care about the hostname, for example www.google.com.
I was wondering if there's an option to create a file pipe which consumes the log data and filters it, resulting in a file which contains domain names only.
I saw that I could specify the file path with the -LogFilePath argument, which may help.
Any help / ideas will be appreciated!
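A rough PowerShell sketch of that filtering idea. The log path and the regex are assumptions; the DNS debug log encodes queried names like "(3)www(6)google(3)com(0)", which this converts to dotted form:

# assumed setup: enable query logging to a file
#   Set-DnsServerDiagnostics -Queries $true -LogFilePath "C:\dns\dns.log"
# extract the encoded names, convert them, and keep unique ones:
Get-Content "C:\dns\dns.log" |
    Select-String -Pattern '(\(\d+\)[A-Za-z0-9-]+)+\(0\)' |
    ForEach-Object { ($_.Matches.Value -replace '\(\d+\)', '.').Trim('.') } |
    Sort-Object -Unique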

Rsyslog forwarding over HTTP

I would like rsyslog to forward log messages via HTTP to the service which will process them.
I don't see an exact HTTP-forwarding module for rsyslog, and I don't want to create another listener on another port just to handle incoming TCP connections, as would be required with the TCP output module.
Is it possible or what are the alternatives to process Rsyslog messages by HTTP handler?
Since rsyslog version 8.2202, there is the omhttp module.
Here is an example of what you'd need to implement in /etc/rsyslog.conf:
# include the omhttp module
module(load="omhttp")

# template for each individual message, not the format of the resulting batch
template(name="tpl_omhttp_forwarding" type="string" string="%msg%")

# action to send ALL log messages via http
*.* {
    action(
        type="omhttp"
        server="192.1.1.1"
        serverport="443"
        template="tpl_omhttp_forwarding"
        batch="on"
        batch.format="jsonarray"
        batch.maxsize="10"
        action.resumeRetryCount="-1"
    )
}
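For reference, with batch.format="jsonarray" the receiving service gets each POST body as a single JSON array of templated messages, e.g. ["first message","second message"].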
You can find all the action parameters for omhttp here.
NOTE:
Depending on the OS you're using, you may need to build it yourself, or use the repositories here.
For some platforms, there is no omhttp package because of missing or too old dependencies.
There is a new output module called omhttp. I'm looking into it as well, but am having difficulty finding documentation.
https://github.com/rsyslog/rsyslog/issues/3024
Edit: Updated docs are here
https://www.rsyslog.com/doc/v8-stable/configuration/modules/omhttp.html#message-batching

Logging for two different environment logs in to a single log file

I am quite new to the log4j2 logger, and my requirement is to write logs from both an application server and a web server.
I have two different environments on which a JBoss server is deployed.
Now I have a log file on the web server environment which is writing error logs, and I want to write logs from the application server to the same file.
Please suggest.
If you want the logs to be integrated together you should use a solution like Splunk or Elastic Search/Logstash/Kibana (ELK).
When you try to write to a file from 2 different processes your file will get corrupted unless you use file locking. However, your throughput will decrease significantly and it isn't supported for rolling files. So the best approach is to send the logs to a single process where they can be aggregated.
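As an illustration of that approach, a minimal log4j2.xml sketch that ships events from each process over a socket to a single aggregator. The host "loghost" and port 4560 are placeholders, and JsonLayout is just one layout choice:

<Configuration>
    <Appenders>
        <!-- placeholder host/port: point this at the process doing the aggregation -->
        <Socket name="aggregator" host="loghost" port="4560" protocol="TCP">
            <JsonLayout compact="true" eventEol="true"/>
        </Socket>
    </Appenders>
    <Loggers>
        <Root level="info">
            <AppenderRef ref="aggregator"/>
        </Root>
    </Loggers>
</Configuration>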

Tomcat on windows log shipping to logstash

I am trying to configure log shipping/consolidation using Logstash. My Tomcat servers run on Windows. I am running into a few problems with my configuration: Tomcat on Windows logging via log4j, with a Redis consolidator, Elasticsearch, Logstash, and Kibana running on a single Linux server.
There are fewer log shippers available on Windows. It looks like nxlog does not work with Redis out of the box, so I have reverted to using Logstash to ship. I would like to learn what others prefer to use.
Rather than use custom appenders, I would prefer to have Tomcat use log4j to log to a file and then feed that file as the input to be shipped to Redis. I don't want to change the log formats.
No json-event format for me - http://spredzy.wordpress.com/2013/03/02/monitor-your-cluster-of-tomcat-applications-with-logstash-and-kibana/. I can't seem to get the right file config in shipper.conf.
Any sample config for log4j files, fed to Logstash via Redis, would help.
Thanks
I'm currently writing a Java library to send logs to Logstash using ZeroMQ (no central redis broker required).
Disclaimer: it's not quite perfect yet, but may be worth keeping an eye on.
https://github.com/stuart-warren/logit
You can set up the standard JULI log configuration (or log4j if you are using that); plus, with the tomcat-valve jar, you can send access logs as well by configuring server.xml.
It does, however, send it in json-event format by default.
I'm confused as to why you wouldn't want to save all the processing on the Logstash server. You can (and currently probably should) log to file in standard format as well.
logging.properties file:
# "handlers" specifies a comma separated list of log Handler
# classes. These handlers will be installed during VM startup.
# Note that these classes must be on the system classpath.
# By default we only configure a ConsoleHandler, which will only
# show messages at the INFO and above levels.
handlers= com.stuartwarren.logit.jul.ZmqAppender
# handlers= com.stuartwarren.logit.jul.ZmqAppender, java.util.logging.ConsoleHandler
# Default global logging level.
# This specifies which kinds of events are logged across
# all loggers. For any given facility this global level
# can be overridden by a facility-specific level.
# Note that the ConsoleHandler also has a separate level
# setting to limit messages printed to the console.
.level=INFO
# Limit the messages that are printed on the console to INFO and above.
com.stuartwarren.logit.jul.ZmqAppender.level=INFO
com.stuartwarren.logit.jul.ZmqAppender.socketType=PUSHPULL
com.stuartwarren.logit.jul.ZmqAppender.endpoints=tcp://localhost:2120
com.stuartwarren.logit.jul.ZmqAppender.bindConnect=CONNECT
com.stuartwarren.logit.jul.ZmqAppender.linger=1000
com.stuartwarren.logit.jul.ZmqAppender.sendHWM=1000
com.stuartwarren.logit.jul.ZmqAppender.layout=com.stuartwarren.logit.jul.Layout
com.stuartwarren.logit.jul.Layout.layoutType=logstashv1
com.stuartwarren.logit.jul.Layout.detailThreshold=WARNING
com.stuartwarren.logit.jul.Layout.tags=tag1,tag2,tag3
com.stuartwarren.logit.jul.Layout.fields=field1:value1,field2:value2,field3:value3
java.util.logging.ConsoleHandler.level = FINE
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
server.xml:
<Valve className="com.stuartwarren.logit.tomcatvalve.ZmqAppender"
       layout="com.stuartwarren.logit.tomcatvalve.Layout"
       socketType="PUSHPULL"
       endpoints="tcp://localhost:2120"
       bindConnect="CONNECT"
       linger="1000"
       sendHWM="1000"
       layoutType="logstashv1"
       iHeaders="Referer,User-Agent"
       oHeaders=""
       cookies=""
       tags="tag1,tag2,tag3"
       fields="field1:value1,field2:value2,field3:value3" />
