This seems like an obvious request to me so I'm hoping others may have already solved this.
I have application JBoss logs with lots and lots of errors. In order to manage and address these, I'd like to figure out a way to track them. After looking at
How to retrieve unique count of a field using Kibana + Elastic Search
I'm thinking I can use a similar approach.
Per the Elasticsearch docs, it looks like facets have been replaced by aggregations, so I'm thinking I should dig into the sum aggregation, but I'm not sure yet.
I'm still not sure of the best way to further break down my JBoss log records. The field I'm most interested in is the message field, which has a date/time stamp and hostname in front of each record. What's the best approach to tackle this? Break the message field down further, ignoring the first two elements and then sorting and counting the next section of the field? I may need to ignore some of the end of the record as well, but I will deal with that next...
I'm pretty new to ELK stack but excited about its possibilities.
Thx.
Joe
Logstash (the L in ELK) comes with lots of filtering options. The most useful is grok, which is well suited to parsing fields out of a long message into {key, value} pairs.
You can also delete or ignore particular data from the message in Logstash through the different kinds of plugins available; you can explore them at http://logstash.net/docs/1.4.2/.
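For example, a minimal grok filter along these lines (the pattern and the field names log_timestamp, source_host, and log_message are assumptions, not taken from your actual JBoss format) would split the leading timestamp and hostname away from the rest of the message:
filter {
  grok {
    # assumed layout: "<timestamp> <hostname> <rest of the message>"
    match => { "message" => "%{TIMESTAMP_ISO8601:log_timestamp} %{HOSTNAME:source_host} %{GREEDYDATA:log_message}" }
  }
}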
After you send that data to Elasticsearch, you can use the power of Kibana to create a dashboard based on your requirements.
Hence, ELK is perfectly suited to the requirement you have.
The best and easiest way to get your JBoss output into ELK is through a socket appender. There are lots of tutorials, and it will automatically give you your message breakdown for free.
See this for an example: http://blog.akquinet.de/2015/08/24/logstash-jboss-eap/
Please note that I personally had to change the appenders and consult the documentation to get the correct fields. If you are using Elasticsearch 2.0, update the configuration accordingly. For simple debugging, simply output to stdout.
Once you have the socket appenders working correctly you are laughing: go to Kibana and configure the dashboard with whatever aggregation you would like. I would not recommend breaking the message down further, as you would then have a custom message breakdown that will not apply to a standard JBoss implementation; feel free to add additional key/value pairs such as appname, etc.
SAMPLE:
* jboss-eap-6.4.0.0
* elasticsearch-2.0.0-beta2
* kibana-4.2.0-beta2-windows
* logstash-2.0.0-beta1
Create a file called log4j.conf under the logstash/conf dir, e.g. "C:\_apps\logstash-2.0.0-beta1\conf\log4j.conf", with the below content.
input {
  log4j {
    mode => "server"
    host => "0.0.0.0"
    port => 4712
    type => "log4j"
  }
}
output {
  elasticsearch {
    hosts => "127.0.0.1"
    #cluster => "myAppName"
    index => "logstash-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}
Run Logstash with the following command prompt within dir:
bin\logstash.bat -f conf\log4j.conf
Configuring Appenders:
JBOSS APPENDER
Within the profile:
<custom-handler name="Remotelog4j" class="org.apache.log4j.net.SocketAppender" module="org.apache.log4j">
<level name="INFO"/>
<properties>
<property name="RemoteHost" value="localhost"/>
<property name="Port" value="4712"/>
<!--property name="BufferSize" value="1000"/-->
<!--property name="Blocking" value="false"/-->
</properties>
</custom-handler>
Within the root logger configuration, define your handlers:
<root-logger>
    <level name="INFO"/>
    <handlers>
        <handler name="CONSOLE"/>
        <handler name="FILE"/>
        <handler name="Remotelog4j"/>
    </handlers>
</root-logger>
Start JBoss and note that your command prompt is printing out all the incoming messages from your standalone JBoss instance.
Configuring Another Application with OLD Log4J
Log4J version log4j-1.2.15.jar
Inside the packaged WAR I created this simple additional log4j appender:
<appender name="log4jSocket" class="org.apache.log4j.net.SocketAppender" module="org.apache.log4j">
<level name="ERROR"/>
<param name="RemoteHost" value="localhost"/>
<param name="Port" value="4712"/>
<param name="threshold" value="ERROR" />
</appender>
Again, add the appender to your application log4j loggers.
<logger name="com.somepackage" additivity="false">
<level value="error"/>
<appender-ref ref="default"/>
<appender-ref ref="event"/>
<appender-ref ref="log4jSocket"/>
</logger>
Now restart your JBoss configuration and deploy/start your application inside JBoss. You will get both JBoss output and application output inside Logstash, nicely broken into key/value pairs.
Related
I have a Spring Boot app (version 2.1.3.RELEASE) using Spring Cloud Sleuth, and I would like to log the value of a baggage item in the current Span Context. I am using Logback.
My logger has this logback configuration:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>
%date{ISO8601} %highlight(%-5level) %magenta(%thread) %cyan(%logger) %message %X{X-B3-TraceId} %X{X-B3-SpanId} %X{foo}%n
</pattern>
</encoder>
</appender>
<root level="info">
<appender-ref ref="STDOUT" />
</root>
</configuration>
In my Controller, I am trying to set a baggage on the current Span Context:
Span currentSpan = this.tracer.currentSpan();
ExtraFieldPropagation.set(currentSpan.context(), "foo", "bar");
In my application.properties, I have set the following properties:
spring.sleuth.propagation-keys=foo
# Set the value of the foo baggage into MDC:
spring.sleuth.log.slf4j.whitelisted-mdc-keys=foo
But I am not able to log the value of foo (with %X{foo}).
The result is an empty string for foo:
My message e575e59578b92ace e575e59578b92ace
The whitelisted baggage values set in the current span are written to the SLF4J MDC in org.springframework.cloud.sleuth.log.Slf4jScopeDecorator.decorateScope(), so if a new child span is created, you will find the baggage value in the logs.
I am currently unaware of an elegant way of getting MDC updated as soon as a baggage value is set.
The only solution I have come up with is to manually set the value in MDC. In your case that would mean something like
org.slf4j.MDC.put("foo", "bar");
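Putting the two together in the controller would look something like the sketch below (assuming foo is whitelisted as in the question, and guarding against a missing current span):
import brave.Span;
import brave.propagation.ExtraFieldPropagation;
import org.slf4j.MDC;

// inside the controller method
Span currentSpan = this.tracer.currentSpan();
if (currentSpan != null) {
    // Propagate the baggage to child spans and downstream services...
    ExtraFieldPropagation.set(currentSpan.context(), "foo", "bar");
    // ...and mirror it into the MDC so %X{foo} resolves for log lines written in the current span.
    MDC.put("foo", "bar");
}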
Hope this helps
I'm working on a big project with many submodules. When debugging component xyz, that component often accesses services in other modules. To log every debug message, we have to define many loggers in our logback.xml.
Is it possible to define overarching super loggers or parent loggers?
example:
instead of writing this:
<logger name="com.a.b.c.xyz" level="debug" />
<logger name="com.a.b.d.core.xyz" level="debug" />
<logger name="com.a.b.e.xyz" level="debug" />
<logger name="com.a.b.e.f.xyz" level="debug" />
<logger name="com.a.b.t.services.xyz" level="debug" />
Is it possible to define something like this:
<logger name="xyz-super" level="debug">
<child-logger name="..." />
<child-logger name="..." />
...
</logger>
Once debugging module xyz is done, everyone forgets which packages were relevant for it, so keeping these parent loggers would help with future problems.
If I understand what you're asking for, you have a concept of a "component" which crosses Java packages, and you want to handle setting the logging level on the basis of which component it's in, and not necessarily on which package it's in. I see a few approaches one could take.
While the standard for logger name is based on the class name (and thus the Java package that the class is in), you don't need to use that for your logger names. That is, you could have a logger hierarchy which is different from your package hierarchy. In your com.a.b.c.xyz class, you can get a logger with:
final Logger logger = LoggerFactory.getLogger("com.a.b.xyz.c");
While in your com.a.b.d.core.xyz class, get a logger with:
final Logger logger = LoggerFactory.getLogger("com.a.b.xyz.d.core");
And then you can just use normal logger level definitions, setting the logging level for com.a.b.xyz to get all the loggers underneath that component. It's a bit unconventional and may confuse developers new to your project, but if you really want your logging hierarchy and your package hierarchy to be different, it's a thing you can do.
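With that naming scheme, a single entry in your logback.xml then covers the whole component, for example:
<logger name="com.a.b.xyz" level="debug" />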
Another approach is to leave your logging name hierarchy as is, but use SLF4J/Logback's Marker mechanism to "mark" each log message for each component.
final Logger logger = LoggerFactory.getLogger(getClass());
final Marker xyzComponent = MarkerFactory.getMarker("XYZ");
…
logger.info(xyzComponent, "A log message");
You could then set up Filters in Logback based on the markers that you are looking for. It does require you to be consistent and make sure that every message you care about is tagged with the appropriate Marker, but it's the closest thing to a "super logger" or "group logger" that the SLF4J architecture has. The Marker mechanism is really powerful and allows you to do just about anything with it, as long as your messages have the right Marker on them and you set up your logging configuration to filter down to just the messages with that marker.
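As one possible sketch (the marker name matches the XYZ example above; the appender is a placeholder for your own setup), a Logback TurboFilter can let marked events through even when the logger level would otherwise drop them:
<configuration>
    <!-- Events carrying the XYZ marker are accepted regardless of logger level;
         everything else is left NEUTRAL and goes through the normal level checks. -->
    <turboFilter class="ch.qos.logback.classic.turbo.MarkerFilter">
        <Marker>XYZ</Marker>
        <OnMatch>ACCEPT</OnMatch>
    </turboFilter>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%date %-5level %marker %logger - %message%n</pattern>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="STDOUT" />
    </root>
</configuration>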
The other approach I can think of is to basically keep doing what you're doing now, specifying a lot of separate loggers at debug level, but to keep the "debug" logging configuration for each component in a separate file. Then, when you need to debug a component, you just need to add (or uncomment) the appropriate include element in your main logging settings.
In file xyz-debug.xml:
<included>
    <logger name="com.a.b.c.xyz" level="debug" />
    <logger name="com.a.b.d.core.xyz" level="debug" />
    <logger name="com.a.b.e.xyz" level="debug" />
    <logger name="com.a.b.e.f.xyz" level="debug" />
    <logger name="com.a.b.t.services.xyz" level="debug" />
</included>
In file abc-debug.xml:
<included>
    <logger name="com.a.b.c.abc" level="debug" />
    <logger name="com.a.b.d.core.abc" level="debug" />
    <logger name="com.a.b.e.abc" level="debug" />
    <logger name="com.a.b.e.f.abc" level="debug" />
    <logger name="com.a.b.t.services.abc" level="debug" />
</included>
And then in your main logback.xml:
<!--<include file="xyz-debug.xml"/>-->
<!--<include file="abc-debug.xml"/>-->
And you just uncomment the appropriate line when you need to debug that component. Perhaps a little fiddly and simplistic, and it may be really confusing if somebody forgets to update xyz-debug.xml when the xyz component spreads into a new package, but I can imagine this working well enough for some teams. It also doesn't require any code changes, which may be a plus.
Logback and SLF4J have a lot of power and possibilities, which is (as usual) both a strength and a weakness as it can take a while to learn about everything that they can do. But usually one can find a way to get them to work the way one wants, and sometimes one can find a way even better than what one had in mind.
I am working on a flow that will, when triggered by an HTTP request, download files from an FTP server. In order to do this on request, instead of on polling, I am using the Mule Requester.
I have found that without the requester, the FTP connector will set "incomingFilename" in the inboundProperties collection for each of the files. When used with the Mule Requester, the filename property is not set, therefore I have no idea what file I am processing... or, in this case, saving to the file system. In the code below I am using the 'counter' variable for thefilename in case the filename doesn't come through.
Any idea how to fix this issue? Here is the flow below:
<ftp:connector name="FTPConfig" pollingFrequency="3000" validateConnections="true" doc:name="FTP"></ftp:connector>
<flow name="FileRequestFlow">
<http:listener config-ref="HTTP_Listener_Configuration" path="csvfilesready" allowedMethods="GET" doc:name="HTTP"></http:listener>
<mulerequester:request-collection config-ref="Mule_Requester" resource="ftp://username:pswd#127.0.0.1:21/Incoming?connector=FTPConfig" doc:name="Mule Requester"></mulerequester:request-collection>
<foreach collection="#[payload]" doc:name="For Each">
<set-variable variableName="thefilename" value="#[message.inboundProperties.originalFilename==null ? counter.toString()+'.csv' : message.inboundProperties.originalFilename] " doc:name="Variable"/>
<file:outbound-endpoint path="/IncomingComplete" outputPattern="#[flowVars.thefilename]" responseTimeout="10000" doc:name="File"></file:outbound-endpoint>
<logger message="#['File saved as: ' + payload]" level="INFO" doc:name="Logger"></logger>
</foreach>
<logger message="#[payload]" level="INFO" doc:name="Logger"></logger>
</flow>
UPDATE:
Below is an option for working with the requester; however, you can use request-collection. The key is to realize that it will return a MuleMessageCollection and to use a Collection Splitter directly after the Requester, which will then return the FTP file messages individually, each with the originalFilename.
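A sketch of that arrangement (Mule 3 element names; the requester resource is carried over from the question, and per the note above each split message should expose originalFilename):
<flow name="FileRequestFlow">
    <http:listener config-ref="HTTP_Listener_Configuration" path="csvfilesready" allowedMethods="GET" doc:name="HTTP"/>
    <mulerequester:request-collection config-ref="Mule_Requester" resource="ftp://username:pswd#127.0.0.1:21/Incoming?connector=FTPConfig" doc:name="Mule Requester"/>
    <!-- Splits the MuleMessageCollection so each FTP file arrives as its own message -->
    <collection-splitter doc:name="Collection Splitter"/>
    <set-variable variableName="thefilename" value="#[message.inboundProperties.originalFilename]" doc:name="Variable"/>
    <file:outbound-endpoint path="/IncomingComplete" outputPattern="#[flowVars.thefilename]" responseTimeout="10000" doc:name="File"/>
</flow>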
After playing with this a while, I have found that with FTP in the Mule Requester you can get the filename only if you use it as a request, not request-collection.
I have not been able to get request-collection to work when you need the filenames associated.
So... if you need multiple files, you need to do something like loop on the requester until the payload is null.
If you have alternate methods please let me know.
I have written an alternate FTP connector; it allows you to issue a list command in the flow, followed by a loop to read the files.
See: https://github.com/rbutenuth/ftp-client-connector/
Can I override the logging level for a specific class only, using logback.xml? I.e. everything remains at INFO, except for one class which will log at DEBUG.
I appended this after the default one, but it does not seem to work:
<logger name="com.pack1.pack2.paack3.ClassName" additivity="false" level="debug">
<appender-ref ref="file1"/>
</logger>
Thanks,
Donald
Doing it exactly like this works for me:
<logger name="org.apache.zookeeper" level="WARN" />
(In case you set the name of the logger explicitly) check that the name of the logger matches the name you set for your logger in your source code.
Use upper-case letters for the level keywords (DEBUG, INFO, ...),
and I am not sure, but maybe you should use Level instead of level.
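For the scenario in the question, a minimal logback.xml along these lines keeps everything at INFO and raises only the one class to DEBUG (the file name and class name are placeholders):
<configuration>
    <appender name="file1" class="ch.qos.logback.core.FileAppender">
        <file>app.log</file>
        <encoder>
            <pattern>%date %-5level %logger - %message%n</pattern>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="file1" />
    </root>
    <!-- Only this class logs at DEBUG; additivity="false" prevents double logging through the root. -->
    <logger name="com.pack1.pack2.pack3.ClassName" level="DEBUG" additivity="false">
        <appender-ref ref="file1" />
    </logger>
</configuration>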
I'm pretty sure I've done that before and it's worked. Try uppercase DEBUG.
I'm currently evaluating the Spring-db4o integration. I was impressed by the declarative transaction support as well as the ease to provide declarative configuration.
Unfortunately, I'm struggling to figure out how to create an index on specific fields. Spring is preparing the database during Tomcat server startup. Here's my Spring entry:
<bean id="objectContainer" class="org.springmodules.db4o.ObjectContainerFactoryBean">
<property name="configuration" ref="db4oConfiguration" />
<property name="databaseFile" value="/WEB-INF/repo/taxonomy.db4o" />
</bean>
<bean id="db4oConfiguration" class="org.springmodules.db4o.ConfigurationFactoryBean">
<property name="updateDepth" value="5" />
<property name="configurationCreationMode" value="NEW" />
</bean>
<bean id="db4otemplate" class="org.springmodules.db4o.Db4oTemplate">
<constructor-arg ref="objectContainer" />
</bean>
db4oConfiguration doesn't provide any means to specify the index. I wrote a simple ServiceServletListener to set the index. Here's the relevant code:
Db4o.configure().objectClass(com.test.Metadata.class).objectField("id").indexed(true);
Db4o.configure().objectClass(com.test.Metadata.class).objectField("value").indexed(true);
I inserted around 6000 rows in this table and then used a SODA query to retrieve a row based on the key. But the performance was pretty poor. To verify that indexes have been applied properly, I ran the following program:
private static void indexTest(ObjectContainer db) {
    for (StoredClass storedClass : db.ext().storedClasses()) {
        for (StoredField field : storedClass.getStoredFields()) {
            if (field.hasIndex()) {
                System.out.println("Field " + field.getName() + " is indexed!");
            } else {
                System.out.println("Field " + field.getName() + " isn't indexed!");
            }
        }
    }
}
Unfortunately, the results show that no field is indexed.
In a similar context, in the OME browser, I saw there's an option to create an index on the fields of each class. If I turn the index to true and save, it appears to apply the change to db4o. But again, if I run this sample test on the db4o file, it doesn't reveal any index.
Any pointers on this will be highly appreciated.
Unfortunately I don't know the spring extension for db4o that well.
However, the Db4o.configure() stuff is deprecated and works differently than in earlier versions. In earlier versions there was a global db4o configuration; now this configuration doesn't exist anymore. The Db4o.configure() call doesn't change the configuration of running object containers.
Now you could try this workaround on a running container:
container.ext().configure().objectClass(com.test.Metadata.class).objectField("id").indexed(true);
This way you change the configuration of the running object container. Note that changing the configuration of a running object container can lead to dangerous side effects and should only be used as a last resort.
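For completeness, outside the Spring integration the non-deprecated route in newer db4o versions (7.4+) is to declare the indexes on a fresh configuration before the container is opened. A sketch (the Metadata class comes from the question; the class and file path here are otherwise just placeholders):
import com.db4o.Db4oEmbedded;
import com.db4o.ObjectContainer;
import com.db4o.config.EmbeddedConfiguration;

public class IndexedContainer {
    public static ObjectContainer open(String databaseFile) {
        // Indexes declared on the configuration are created (or kept) when the file is opened.
        EmbeddedConfiguration config = Db4oEmbedded.newConfiguration();
        config.common().objectClass(com.test.Metadata.class).objectField("id").indexed(true);
        config.common().objectClass(com.test.Metadata.class).objectField("value").indexed(true);
        return Db4oEmbedded.openFile(config, databaseFile);
    }
}
With the springmodules integration, you would presumably need to apply the same objectClass()/objectField() calls to the Configuration produced by ConfigurationFactoryBean before the ObjectContainerFactoryBean opens the file.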