I'm building a full cloud solution using Kubernetes and Spring Boot.
My Spring Boot application is deployed in a container and logs directly to the console.
Since containers are ephemeral, I'd like to also send logs to a remote Logstash server, so that they can be processed and forwarded to Elasticsearch.
Normally I would install Filebeat on the server hosting my application, and I still could, but isn't there a built-in way to avoid writing my logs to a file before shipping them?
Currently I'm using Log4j, but I see no problem switching to another logger as long as it has a "logstash appender".
You can try adding a logback.xml file to your resources folder:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE configuration>
<configuration scan="true">
    <include resource="org/springframework/boot/logging/logback/base.xml"/>
    <appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <remoteHost>localhost</remoteHost>
        <port>5000</port>
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <customFields>{"app_name":"YourApp", "app_port": "YourPort"}</customFields>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="logstash"/>
    </root>
</configuration>
Then add the Logstash encoder dependency:
pom.xml
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>4.11</version>
</dependency>
logstash.conf
input {
  udp {
    port => "5000"
    type => syslog
    codec => json
  }
  tcp {
    port => "5000"
    type => syslog
    codec => json_lines
  }
  http {
    port => "5001"
    codec => "json"
  }
}
filter {
  if [type] == "syslog" {
    mutate {
      add_field => { "instance_name" => "%{app_name}-%{host}:%{app_port}" }
    }
  }
}
output {
  elasticsearch {
    hosts => ["${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}"]
    index => "logs-%{+YYYY.MM.dd}"
  }
}
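For reference, each event the TCP appender ships is a single JSON line; with the configuration above it looks roughly like this (a sketch, since the exact field set and logger name depend on your encoder version and application):

{"@timestamp":"2017-01-01T12:00:00.000+00:00","@version":1,"message":"Scheduled appointment","logger_name":"com.example.AppointmentService","thread_name":"main","level":"INFO","level_value":20000,"app_name":"YourApp","app_port":"YourPort"}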
I've just created a full working example in my repository.
Hope it helps someone.
I'm using Spring Boot and would like to have this code:
LOG.info("Scheduled appointment for user 12345 [appointment ID 100]");
Produce the following log message in JSON GELF format:
{
"version": "1.1",
"host": "hostname.ec2.internal",
"short_message": "Scheduled appointment for user 12345 [appointment ID 100]",
"timestamp": 1318280136,
"level": 1,
"_user": "user#acme.com",
"_clientip": "127.0.0.1",
"_env": "prod",
"_app":"scheduler"
}
Do I need to create my own logger for this or can I customize Logback/Log4j2 to behave this way?
From a Log4j 2.x perspective, you can use the JSON Template Layout, which has a built-in eventTemplate for GELF.
Your appender configuration in the log4j2-spring.xml file would look like:
<Console name="CONSOLE">
    <JsonTemplateLayout eventTemplateUri="classpath:GelfLayout.json"/>
</Console>
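A complete log4j2-spring.xml around that appender might look like this (a sketch; the root level INFO is an assumption):

<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
    <Appenders>
        <Console name="CONSOLE">
            <JsonTemplateLayout eventTemplateUri="classpath:GelfLayout.json"/>
        </Console>
    </Appenders>
    <Loggers>
        <!-- sketch: adjust the level to your needs -->
        <Root level="INFO">
            <AppenderRef ref="CONSOLE"/>
        </Root>
    </Loggers>
</Configuration>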
Remark: Since Spring Boot uses Logback as its default logging system, you'll have to exclude spring-boot-starter-logging and replace it with spring-boot-starter-log4j2 (see the snippet after the dependency below).
Moreover, the JSON Template Layout requires an additional dependency:
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-layout-template-json</artifactId>
</dependency>
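For the starter swap mentioned in the remark, the usual pom.xml arrangement is the following (a sketch, shown for spring-boot-starter-web; exclude the logging starter from whichever starter pulls it in):

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-logging</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-log4j2</artifactId>
</dependency>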
I installed the ELK stack on my Windows 10 machine and used log4net to push logs to Logstash -> Elasticsearch. The log data is displayed in Kibana and everything is fine. My Logstash config is:
input {
  udp {
    port => 5960
    codec => multiline {
      charset => "UTF-8"
      pattern => "^(DEBUG|WARN|ERROR|INFO|FATAL)"
      negate => true
      what => previous
    }
    type => "log4net"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "myindex"
  }
}
When I try to search for a keyword that exists in the message text (using the search input, with the date range set to the last 3 months), I get:
"No results match your search criteria"
Note: if I use stdin {} instead of udp {} in the Logstash config, I can search for any keyword.
I reinstalled the stack on another machine and the same issue happened.
Any suggestions?
I found the solution:
The problem was the encoding of the data coming from log4net, so you need to configure the UDP appender in the log4net config file as follows:
<appender name="UdpAppender" type="log4net.Appender.UdpAppender">
<remoteAddress value="127.0.0.1" />
<remotePort value="5960" />
<encoding value="UTF-8" />
<layout type="log4net.Layout.PatternLayout, log4net">
<conversionPattern value="%-5level %date [%-5.5thread] %-40.40logger - %message%newline" />
</layout>
I have a 3-node setup:
10.x.x.1 - application and Filebeat
10.x.x.2 - machine for parsing, running Logstash
10.x.x.3 - centralized Logstash node from which we need to push messages into Elasticsearch
On 10.x.x.2, when I set the output codec to stdout, I can see the messages coming from 10.x.x.1.
Now I need to forward all the JSON messages from 10.x.x.2 to 10.x.x.3. I tried using TCP, but the messages are not getting sent.
10.x.x.2 logstash conf file
input {
  beats {
    port => 5045
  }
}
output {
  #stdout { codec => rubydebug }
  tcp {
    host => "10.x.x.3"
    port => 3389
  }
}
10.x.x.3 logstash conf file
input {
  tcp {
    host => "10.x.x.3"
    port => 3389
    #mode => "server"
    #codec => "json"
  }
}
output {
  stdout { codec => rubydebug }
}
Is there any plugin that can send JSON data from one Logstash server to another?
Your config should work, but you have to be careful with the "codec" property.
First try setting it to "line" on both the output and the input plugins of the two Logstash instances, and see if logs come in. With the codec set to "line" you should logically have no problem forwarding the logs. Then work on the "json" codec.
Don't forget that you can activate Logstash's debug mode with the --debug argument, and log to a file with -l logFileName.
When you start working with the json codec, look for "_jsonparsefailure" tags, which could explain why logs are not being transferred between the two Logstash instances.
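As a concrete sketch of the end state: the json_lines codec frames one JSON document per line over TCP (a plain json codec can fail to split events on a raw stream), so a working pair of configs could look like this:

# 10.x.x.2 (sender)
output {
  tcp {
    host => "10.x.x.3"
    port => 3389
    codec => json_lines
  }
}

# 10.x.x.3 (receiver)
input {
  tcp {
    port => 3389
    mode => "server"
    codec => json_lines
  }
}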
I just built an ELK server on Windows, so I'm new to the process. I've read through the docs but am having trouble parsing out my IIS advanced logs, especially the X-Forwarded-For data, as we're behind a load balancer.
My advanced logging is set up to output the data like this:
$date, $time, $s-ip, $cs-uri-stem, $cs-uri-query, $s-port, $cs-username, $c-ip, $X-Forwarded-For, $csUser-Agent, $cs-Referer, $sc-status, $sc-substatus, $sc-win32-status, $time-taken
I set up my logstash.conf like this:
input {
  tcp {
    host => "localhost"
    type => "iis"
    port => 5044
  }
}
filter {
  if [type] == "iis" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:log_timestamp} %{IPORHOST:site} %{URIPATH:page} %{NOTSPACE:query_string} %{NUMBER:port} %{NOTSPACE:username} %{IPORHOST:client_host} %{NOTSPACE:useragent} %{NOTSPACE:referer} %{GREEDYDATA:response} %{NUMBER:httpStatusCode:int} %{NUMBER:scSubstatus:int} %{NUMBER:scwin32status:int} %{NUMBER:timeTakenMS:int}" }
    }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "iis"
    document_type => "main"
  }
}
I don't think this is correct as I'm not getting data. I've scoured the docs but am still having issues and am not sure if there are other steps I need to take, like mapping the fields.
I'm currently using Filebeat to push data from one server to my ELK server. I'm not sure if this is the best approach either (maybe NXLog?). We don't want to install Logstash on the client machines.
Can someone lend me a hand? It would be GREATLY appreciated!!
Thanks,
George
Since you are using Filebeat, you need to use the beats input, not the tcp input. See the documentation on how to set up Logstash for Beats.
Essentially you need to replace your tcp input with:
input {
  beats {
    port => 5044
  }
}
And inside your Filebeat configuration file, set the document_type to iis so that your filter condition will match.
filebeat:
  prospectors:
    - paths:
        - 'C:\path\to\your\iis\logs\*.log'
      document_type: iis
I installed ELK on an Ubuntu Server 14.04 machine, and now I want to send all my JBoss server logs (via log4j) to it.
Logstash configuration:
Input conf file:
input {
  log4j {
    type => "log4j"
    port => 5000
  }
}
Filter conf file:
filter {
  if [type] == "log4j" {
    grok {
      match => { "message" => MY_GROK_PARSE }
    }
  }
}
And the output file:
output {
  elasticsearch {
    embedded => true
  }
}
And finally, the log4j appender:
<appender name="LOGSTASH" class="org.apache.log4j.net.SocketAppender">
<param name="Port" value="5000"/>
<param name="RemoteHost" value="XXX.XXX.XXX.XXX"/> <!-- There is a real adress here ;-) -->
<param name="ReconnectionDelay" value="50000"/>
<param name="LocationInfo" value="true"/>
<layout class="org.apache.log4j.PatternLayout">
<param name="ConversionPattern" value="%d %-5p [%c{1}] %m%n" />
</layout>
</appender>
But nothing happens with this configuration, so I don't know what I'm misunderstanding.
My other appenders (console and local file) work fine.
The Elasticsearch log shows no information/activity.
Edit :
More about my jboss-log4j.xml:
<appender name="Async" class="org.apache.log4j.AsyncAppender">
<appender-ref ref="FILE" />
<appender-ref ref="CONSOLE" />
<appender-ref ref="LOGSTASH" />
</appender>
<root>
<priority value="INFO" />
<appender-ref ref="Async" />
</root>
I know it's an old post, but someone may find it useful: the log4j SocketAppender can't use a layout; see the docs for SocketAppender:
SocketAppenders do not use a layout. They ship a serialized LoggingEvent object to the server side.
You also don't need an additional filter in the Logstash configuration. The Logstash log4j input plugin's minimal configuration is sufficient:
input {
  log4j {
    data_timeout => 5
    host => "0.0.0.0"
    mode => "server"
    port => 4560
    debug => true
    type => "log4j"
  }
  ...
}
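Putting the two together, a corrected appender for the question above simply drops the layout (a sketch; the port here is matched to the log4j input's 4560):

<!-- sketch: no layout; SocketAppender ships serialized LoggingEvent objects -->
<appender name="LOGSTASH" class="org.apache.log4j.net.SocketAppender">
    <param name="Port" value="4560"/>
    <param name="RemoteHost" value="XXX.XXX.XXX.XXX"/>
    <param name="ReconnectionDelay" value="50000"/>
    <param name="LocationInfo" value="true"/>
</appender>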
You can send logs directly to Elasticsearch in this case; there's no reason to go through Logstash first. You can easily use a filter to filter out messages you're not interested in.
I've written an appender for this, the Log4J2 Elastic REST Appender, if you want to use it. It can buffer log events based on time and/or number of events before sending them to Elasticsearch (using the _bulk API so that it sends them all in one go).
It has been published to Maven Central, so it's pretty straightforward.