I installed ELK on an Ubuntu Server 14.04 machine, and now I want to send all my JBoss server logs to it (using log4j).
Logstash configuration:
The input conf file:
input {
  log4j {
    type => "log4j"
    port => 5000
  }
}
The filter conf file:
filter {
  if [type] == "log4j" {
    grok {
      match => { "message" => MY_GROK_PARSE }
    }
  }
}
And the output conf file:
output {
  elasticsearch {
    embedded => true
  }
}
And finally, the log4j appender:
<appender name="LOGSTASH" class="org.apache.log4j.net.SocketAppender">
<param name="Port" value="5000"/>
<param name="RemoteHost" value="XXX.XXX.XXX.XXX"/> <!-- There is a real adress here ;-) -->
<param name="ReconnectionDelay" value="50000"/>
<param name="LocationInfo" value="true"/>
<layout class="org.apache.log4j.PatternLayout">
<param name="ConversionPattern" value="%d %-5p [%c{1}] %m%n" />
</layout>
</appender>
But nothing happens with this configuration, so I don't know what I'm misunderstanding.
My other appenders (console and local file) work fine.
The Elasticsearch log doesn't show any information or activity.
Edit:
More about my jboss-log4j.xml:
<appender name="Async" class="org.apache.log4j.AsyncAppender">
<appender-ref ref="FILE" />
<appender-ref ref="CONSOLE" />
<appender-ref ref="LOGSTASH" />
</appender>
<root>
<priority value="INFO" />
<appender-ref ref="Async" />
</root>
I know it's an old post, but someone may find this useful: log4j's SocketAppender can't use a layout; see the docs for SocketAppender:
SocketAppenders do not use a layout. They ship a serialized LoggingEvent object to the server side.
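So the PatternLayout in the appender above is simply ignored; the appender only needs the connection details. A minimal sketch of the corrected appender, using the same address and port as the question:

<appender name="LOGSTASH" class="org.apache.log4j.net.SocketAppender">
  <param name="Port" value="5000"/>
  <param name="RemoteHost" value="XXX.XXX.XXX.XXX"/>
  <param name="ReconnectionDelay" value="50000"/>
  <param name="LocationInfo" value="true"/>
  <!-- no <layout> element: SocketAppender ships serialized LoggingEvent objects, so a layout has no effect -->
</appender>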
You also don't need the additional filter in the Logstash configuration; the Logstash log4j input plugin's minimal configuration is sufficient:
input {
  log4j {
    data_timeout => 5
    host => "0.0.0.0"
    mode => "server"
    port => 4560
    debug => true
    type => "log4j"
  }
  ...
}
You can send logs directly to Elasticsearch in this case; there's no reason to go through Logstash first. You can easily use a filter to drop the messages you're not interested in.
I've written such an appender, the Log4J2 Elastic REST Appender, if you want to use it. It can buffer log events based on time and/or number of events before sending them to Elasticsearch (using the _bulk API so that everything is sent in one go).
It has been published to Maven Central, so adding it is straightforward.
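To illustrate the filtering point: since it's a regular Log4j2 appender, you can attach any stock Log4j2 filter to it in log4j2.xml. A sketch, assuming the appender's plugin element is named ElasticRestAppender (a placeholder; check the project's README for the real element name and its connection attributes):

<Configuration>
  <Appenders>
    <!-- ElasticRestAppender is a placeholder element name; connection attributes omitted -->
    <ElasticRestAppender name="Elastic">
      <!-- ThresholdFilter is a stock Log4j2 filter: drop anything below WARN -->
      <ThresholdFilter level="WARN" onMatch="ACCEPT" onMismatch="DENY"/>
    </ElasticRestAppender>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Elastic"/>
    </Root>
  </Loggers>
</Configuration>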
Related
I am wondering how to enable logging of SQL in the latest Spring Boot 3.0.
I have created a POJO along with its repository (extending CrudRepository). I am using logback.xml as the logging configuration for the project.
I can see the logs and generated SQL in the console, as below:
M 15-Jan-2023 21:08:25.895 [main] DEBUG [org.springframework.jdbc.core.JdbcTemplate:711] - Executing prepared SQL query
M 15-Jan-2023 21:08:25.896 [main] DEBUG [org.springframework.jdbc.core.JdbcTemplate:643] - Executing prepared SQL statement [SELECT "app_user"."id" AS "id", "app_user"."username" AS "username", "app_user"."password" AS "password", "app_user"."created_at" AS "created_at", "app_user"."updated_at" AS "updated_at", "userProfile"."id" AS "userprofile_id", "userProfile"."dob" AS "userprofile_dob", "userProfile"."user_id" AS "userprofile_user_id", "userProfile"."gender" AS "userprofile_gender", "userProfile"."last_name" AS "userprofile_last_name", "userProfile"."created_at" AS "userprofile_created_at", "userProfile"."updated_at" AS "userprofile_updated_at", "userProfile"."first_name" AS "userprofile_first_name", "userProfile"."is_enabled" AS "userprofile_is_enabled", "userProfile"."profile_image" AS "userprofile_profile_image", "userProfile_profileDBFile"."id" AS "userprofile_profiledbfile_id", "userProfile_profileDBFile"."user_id" AS "userprofile_profiledbfile_user_id", "userProfile_profileDBFile"."file_name" AS "userprofile_profiledbfile_file_name", "userProfile_profileDBFile"."file_size" AS "userprofile_profiledbfile_file_size", "userProfile_profileDBFile"."file_type" AS "userprofile_profiledbfile_file_type", "userProfile_profileDBFile"."created_at" AS "userprofile_profiledbfile_created_at", "userProfile_profileDBFile"."updated_at" AS "userprofile_profiledbfile_updated_at", "userProfile_profileDBFile"."file_content" AS "userprofile_profiledbfile_file_content" FROM "app_user" LEFT OUTER JOIN "user_profile" "userProfile" ON "userProfile"."user_id" = "app_user"."id" LEFT OUTER JOIN "profile_db_file" "userProfile_profileDBFile" ON "userProfile_profileDBFile"."user_id" = "userProfile"."id" WHERE "app_user"."id" = ?]
M 15-Jan-2023 21:08:25.918 [main] DEBUG [org.springframework.jdbc.support.SQLStateSQLExceptionTranslator:96] - Extracted SQL state class '42' from value '42703'
M 15-Jan-2023 21:08:25.920 [main] DEBUG [org.springframework.jdbc.support.JdbcTransactionManager:833] - Initiating transaction rollback
M 15-Jan-2023 21:08:25.921 [main] DEBUG [org.springframework.jdbc.support.JdbcTransactionManager:345] - Rolling back JDBC transaction on Connection [HikariProxyConnection#1452327592 wrapping org.postgresql.jdbc.PgConnection#20c283b4]
M 15-Jan-2023 21:08:25.922 [main] DEBUG [org.springframework.jdbc.datasource.DataSourceUtils:251] - Resetting read-only flag of JDBC Connection [HikariProxyConnection#1452327592 wrapping org.postgresql.jdbc.PgConnection#20c283b4]
M 15-Jan-2023 21:08:25.923 [main] DEBUG [org.springframework.jdbc.support.JdbcTransactionManager:389] - Releasing JDBC Connection [HikariProxyConnection#1452327592 wrapping org.postgresql.jdbc.PgConnection#20c283b4] after transaction
I know certain columns are missing in the database, but my console is not showing any SQL-related error.
I have added the below configuration in logback.xml:
<logger name="org.springframework.jdbc" level="trace"
additivity="false">
<appender-ref ref="RollingFile" />
<appender-ref ref="Console" />
</logger>
<logger name="org.springframework.JdbcTemplate" level="all"
additivity="false">
<appender-ref ref="RollingFile" />
<appender-ref ref="Console" />
</logger>
<logger name="org.hibernate" level="all"
additivity="false">
<appender-ref ref="RollingFile" />
<appender-ref ref="Console" />
</logger>
but I still cannot see any error. Does Spring Boot not surface SQL errors, or is some configuration missing on my end? Please let me know what to do here.
I'm currently getting reacquainted with log4net configuration after a few years of it "just working", and I'm running into an issue while working in Visual Studio 2019 with ASP.NET Core. Any help figuring this out would be appreciated.
I have the following filter and level settings, but my log file STILL ends up with all the "chatter" that goes to the debug window of the IDE. As you can see, I've set the level to INFO and above, and I also tried to filter out "noise" containing "Microsoft.AspNetCore", but no luck. I've researched how to do filtering and THINK I've done it right, but it's not filtering ANYTHING.
<?xml version="1.0" encoding="utf-8" ?>
<log4net>
  <appender name="DebugAppender" type="log4net.Appender.DebugAppender">
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
    </layout>
  </appender>
  <appender name="RollingFile" type="log4net.Appender.RollingFileAppender">
    <file value="SDPerf.log" />
    <appendToFile value="true" />
    <maximumFileSize value="100KB" />
    <maxSizeRollBackups value="2" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date %5level %logger.%method [%line] - MESSAGE: %message%newline %exception" />
    </layout>
    <filter type="log4net.Filter.StringMatchFilter">
      <stringToMatch value="Microsoft.AspNetCore"/>
      <acceptOnMatch value="false"/>
    </filter>
  </appender>
  <root>
    <level value="INFO"/>
    <appender-ref ref="DebugAppender" />
    <appender-ref ref="RollingFile" />
  </root>
</log4net>
And here's a snippet from the log file:
2021-05-21 15:50:01,744 INFO Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.? [?] - MESSAGE: Executing action method LongRunningTask.Controllers.HomeController.Test2 (LongRunningTask) - Validation state: Valid
2021-05-21 15:50:01,754 INFO Microsoft.AspNetCore.Mvc.ViewFeatures.PartialViewResultExecutor.? [?] - MESSAGE: Executing PartialViewResult, running view Jobs.
2021-05-21 15:50:01,762 INFO Microsoft.AspNetCore.Mvc.ViewFeatures.PartialViewResultExecutor.? [?] - MESSAGE: Executed PartialViewResult - view Jobs executed in 7.4295ms.
2021-05-21 15:50:01,768 INFO Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.? [?] - MESSAGE: Executed action LongRunningTask.Controllers.HomeController.Test2 (LongRunningTask) in 43.365ms
2021-05-21 15:50:01,772 INFO Microsoft.AspNetCore.Routing.EndpointMiddleware.? [?] - MESSAGE: Executed endpoint 'LongRunningTask.Controllers.HomeController.Test2 (LongRunningTask)'
2021-05-21 15:50:01,776 INFO Microsoft.AspNetCore.Hosting.Diagnostics.? [?] - MESSAGE: Request finished in 71.9539ms 200 text/html; charset=utf-8
2021-05-21 15:50:03,561 INFO Microsoft.AspNetCore.Hosting.Diagnostics.? [?] - MESSAGE: Request starting HTTP/2.0 GET https://localhost:44345/Home/ProcessJob
2021-05-21 15:50:03,567 INFO Microsoft.AspNetCore.Routing.EndpointMiddleware.? [?] - MESSAGE: Executing endpoint 'LongRunningTask.Controllers.HomeController.ProcessJob (LongRunningTask)'
2021-05-21 15:50:03,588 INFO Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.? [?] - MESSAGE: Executing action method LongRunningTask.Controllers.HomeController.ProcessJob (LongRunningTask) - Validation state: Valid
2021-05-21 15:50:03,592 INFO SDPerf..ctor [24] - MESSAGE: Starting Task calls
2021-05-21 15:50:03,602 INFO SDPerf.NewMethod [51] - MESSAGE: Starting iterations for method '2'
2021-05-21 15:50:03,606 INFO SDPerf.NewMethod [51] - MESSAGE: Starting iterations for method '3'
2021-05-21 15:50:03,612 INFO SDPerf.NewMethod [51] - MESSAGE: Starting iterations for method '1'
2021-05-21 15:50:03,619 INFO Microsoft.AspNetCore.Mvc.RedirectToActionResult.? [?] - MESSAGE: Executing RedirectResult, redirecting to /Home/ShowJobProgress?requestId=7b1a8cf8-641f-4599-a90f-748707f4ed63.
2021-05-21 15:50:03,621 INFO Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.? [?] - MESSAGE: Executed action LongRunningTask.Controllers.HomeController.ProcessJob (LongRunningTask) in 44.8757ms
2021-05-21 15:50:03,624 INFO Microsoft.AspNetCore.Routing.EndpointMiddleware.? [?] - MESSAGE: Executed endpoint 'LongRunningTask.Controllers.HomeController.ProcessJob (LongRunningTask)'
2021-05-21 15:50:03,627 INFO Microsoft.AspNetCore.Hosting.Diagnostics.? [
I finally solved this by trial and error. While I couldn't find ANY example to help me figure it out, I discovered that StringMatchFilter only matches against the original message text, NOT the entire formatted line being logged.
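Since "Microsoft.AspNetCore" appears in the logger name rather than in the message text, a filter that matches on the logger name works instead. A sketch using log4net's LoggerMatchFilter, dropped into the RollingFile appender from the config above:

<appender name="RollingFile" type="log4net.Appender.RollingFileAppender">
  <!-- file, rolling and layout settings as above -->
  <!-- deny any event whose logger name starts with Microsoft.AspNetCore -->
  <filter type="log4net.Filter.LoggerMatchFilter">
    <loggerToMatch value="Microsoft.AspNetCore" />
    <acceptOnMatch value="false" />
  </filter>
</appender>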
I installed the ELK stack on my Windows 10 machine. I use log4net to push logs to Logstash -> Elasticsearch. The log data is displayed in Kibana and everything is fine. My Logstash config is:
input {
  udp {
    port => 5960
    codec => multiline {
      charset => "UTF-8"
      pattern => "^(DEBUG|WARN|ERROR|INFO|FATAL)"
      negate => true
      what => previous
    }
    type => "log4net"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "myindex"
  }
}
When I try to search for a keyword that exists in the message text (using the search input, with the date range set to the last 3 months), I get:
"No results match your search criteria"
Note: if I use stdin {} instead of udp {} in the Logstash config, I can search for any keyword.
I reinstalled the stack on another machine and the same issue happened.
Any suggestions?
I found the solution:
The problem was the encoding of the data coming from log4net. You need to configure the UDP appender in the log4net config file as follows:
<appender name="UdpAppender" type="log4net.Appender.UdpAppender">
<remoteAddress value="127.0.0.1" />
<remotePort value="5960" />
<encoding value="UTF-8" />
<layout type="log4net.Layout.PatternLayout, log4net">
<conversionPattern value="%-5level %date [%-5.5thread] %-40.40logger - %message%newline" />
</layout>
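Note that the <encoding> value matches the charset => "UTF-8" declared on the multiline codec in the Logstash input above; if the two encodings disagree, the message text is indexed garbled, which would explain why keyword searches found nothing.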
So, I'm building a full cloud solution using Kubernetes and Spring Boot.
My Spring Boot application is deployed in a container and logs directly to the console.
As containers are ephemeral, I'd also like to send logs to a remote Logstash server, so that they can be processed and sent to Elasticsearch.
Normally I would install Filebeat on the server hosting my application, and I still could, but isn't there a built-in method that would let me avoid writing my logs to a file before sending them?
Currently I'm using log4j, but I see no problem in switching to another logger as long as it has a "Logstash appender".
You can try adding a logback.xml in your resources folder:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE configuration>
<configuration scan="true">
  <include resource="org/springframework/boot/logging/logback/base.xml"/>
  <appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <remoteHost>localhost</remoteHost>
    <port>5000</port>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder">
      <customFields>{"app_name":"YourApp", "app_port": "YourPort"}</customFields>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="logstash"/>
  </root>
</configuration>
Then add the logstash encoder dependency:
pom.xml
<dependency>
  <groupId>net.logstash.logback</groupId>
  <artifactId>logstash-logback-encoder</artifactId>
  <version>4.11</version>
</dependency>
logstash.conf
input {
  udp {
    port => "5000"
    type => syslog
    codec => json
  }
  tcp {
    port => "5000"
    type => syslog
    codec => json_lines
  }
  http {
    port => "5001"
    codec => "json"
  }
}
filter {
  if [type] == "syslog" {
    mutate {
      add_field => { "instance_name" => "%{app_name}-%{host}:%{app_port}" }
    }
  }
}
output {
  elasticsearch {
    hosts => ["${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}"]
    index => "logs-%{+YYYY.MM.dd}"
  }
}
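The %{app_name} and %{app_port} fields used in the mutate filter come from the customFields configured on the LogstashEncoder above; %{host} is added by the Logstash input itself.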
I've just created a full working example in my repository.
Hope it's helpful for someone.
I am simulating a time-out exception (or any other, for that matter) when calling an external vendor web service. I store the original payload at the start of the flow, and in the catch-exception-strategy I replace the current payload with the original one. The payload gets replaced fine, the XSLT transformer applies fine, and the transformed payload is copied to the file outbound endpoint fine, but strangely JMS receives an empty message (though the message properties are intact). I'm using Mule 3.3.1 and ActiveMQ; the result is the same if ActiveMQ is replaced by Sonic.
<flow name="orderRequirementsToVendor">
  <jms:inbound-endpoint queue="order.vendor" />
  <set-variable variableName="originalPayload" value="#[message.payload]" />
  <outbound-endpoint address="${vendor.ws.url}" mimeType="text/xml" connector-ref="https.connector" responseTimeout="100">
    <cxf:proxy-client payload="body" enableMuleSoapHeaders="false">
      <cxf:inInterceptors>
        <spring:bean class="org.apache.cxf.interceptor.LoggingInInterceptor" />
      </cxf:inInterceptors>
      <cxf:outInterceptors>
        <spring:bean class="org.apache.cxf.interceptor.LoggingOutInterceptor" />
      </cxf:outInterceptors>
    </cxf:proxy-client>
  </outbound-endpoint>
  ...
  <choice-exception-strategy>
    <catch-exception-strategy>
      <logger message="In catch exception strategy. Unknown Exception" level="ERROR" />
      <logger message="#[exception.causeException]" level="ERROR" />
      <set-payload value="#[flowVars.originalPayload]"/>
      <set-property propertyName="exception" value="#[exception.summaryMessage]"/>
      <transformer ref="generalErrorTransformer" />
      <file:outbound-endpoint path="/outbound/vendor/error/ack-before" outputPattern="out_vendor_transformed_ack_error_beforeStatusQueue[function:dateStamp].xml" />
      <jms:outbound-endpoint queue="order.status" />
      <file:outbound-endpoint path="/outbound/vendor/error/ack-after" outputPattern="out_vendor_transformed_ack_error_afterStatusQueue[function:dateStamp].xml" />
    </catch-exception-strategy>
  </choice-exception-strategy>
</flow>
Both ack-before and ack-after directories have files with the correct payload in them, but the message received on order.status is empty.
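One guess worth testing (an assumption, not a confirmed fix): the transformer may return a stream-like payload that the first file endpoint consumes, leaving nothing for the JMS endpoint. Forcing the payload to a String right after the transformer would rule that out:

<transformer ref="generalErrorTransformer" />
<!-- assumption: materialize the (possibly streamed) transformer result so every downstream endpoint gets the full payload -->
<object-to-string-transformer/>
<file:outbound-endpoint path="/outbound/vendor/error/ack-before" outputPattern="out_vendor_transformed_ack_error_beforeStatusQueue[function:dateStamp].xml" />
<jms:outbound-endpoint queue="order.status" />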
EDIT:
This is the exception in my logs:
2013-05-29 09:12:33,864 ERROR [orderRequirementsToVendor.stage1.02] exception.AbstractExceptionListener (AbstractExceptionListener.java:299) -
********************************************************************************
Message : COULD_NOT_READ_XML_STREAM. Failed to route event via endpoint: org.mule.module.cxf.CxfOutboundMessageProcessor. Message payload is of type: PostMethod
Code : MULE_ERROR-42999
--------------------------------------------------------------------------------
Exception stack is:
1. Read timed out (java.net.SocketTimeoutException)
java.net.SocketInputStream:-2 (null)
2. Read timed out (com.ctc.wstx.exc.WstxIOException)
com.ctc.wstx.sw.BaseNsStreamWriter:617 (null)
3. COULD_NOT_READ_XML_STREAM (org.apache.cxf.interceptor.Fault)
org.apache.cxf.databinding.stax.StaxDataBinding$XMLStreamDataWriter:133 (null)
4. COULD_NOT_READ_XML_STREAM. Failed to route event via endpoint: org.mule.module.cxf.CxfOutboundMessageProcessor. Message payload is of type: PostMethod (org.mule.api.transport.DispatchException)
org.mule.module.cxf.CxfOutboundMessageProcessor:144 (http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/api/transport/DispatchException.html)
--------------------------------------------------------------------------------
Root Exception stack trace:
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:129)
at com.sun.net.ssl.internal.ssl.InputRecord.readFully(InputRecord.java:293)
+ 3 more (set debug level logging or '-Dmule.verbose.exceptions=true' for everything)
********************************************************************************