Under " dropins folder in websphere there are more than one war files how to generate separate log files for each war" - websphere

When I run my WebSphere Liberty server, it generates a log file at C:\Softwares\wlp\usr\servers\theory\logs\messages.log. At that point C:\Softwares\wlp\usr\servers\theory\dropins contains only theory.war.
When I place more than one WAR file there, such as theory.war, epic.war, and success.war, and start the server, it still generates a single log file, C:\Softwares\wlp\usr\servers\theory\logs\messages.log.
I need to generate separate log files for each WAR, like:
C:\Softwares\wlp\usr\servers\theory\logs\theory.log
C:\Softwares\wlp\usr\servers\theory\logs\epic.log
C:\Softwares\wlp\usr\servers\theory\logs\success.log
I'm using the java.util.logging package.

If you want separate log files, you have to either write a custom FileHandler for JUL or use a third-party logging framework such as Log4j. A custom handler could, for example, route records to different files by logger name, as in the sketch below.
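A minimal sketch of such a handler, assuming each WAR logs under its own top-level logger prefix (the class name, directory property, and key scheme here are hypothetical illustrations, not part of JUL or Liberty):

import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.logging.FileHandler;
import java.util.logging.Handler;
import java.util.logging.LogRecord;
import java.util.logging.SimpleFormatter;

// Hypothetical sketch: routes each record to a per-app file based on the
// first segment of the logger name (e.g. loggers named "theory.*" end up
// in logs/theory.log). Only works if each WAR uses distinct logger prefixes.
public class PerAppFileHandler extends Handler {

    private final Map<String, FileHandler> handlers = new ConcurrentHashMap<>();
    private final String logDir = System.getProperty("perapp.log.dir", "logs");

    @Override
    public void publish(LogRecord record) {
        if (!isLoggable(record)) {
            return;
        }
        String name = record.getLoggerName();
        String app = (name == null || name.isEmpty()) ? "unknown" : name.split("\\.")[0];
        handlerFor(app).publish(record);
    }

    private FileHandler handlerFor(String app) {
        // Lazily open one FileHandler per application, in append mode.
        return handlers.computeIfAbsent(app, a -> {
            try {
                FileHandler fh = new FileHandler(logDir + "/" + a + ".log", true);
                fh.setFormatter(new SimpleFormatter());
                return fh;
            } catch (IOException e) {
                throw new RuntimeException("Cannot open log file for " + a, e);
            }
        });
    }

    @Override
    public void flush() {
        handlers.values().forEach(FileHandler::flush);
    }

    @Override
    public void close() {
        handlers.values().forEach(FileHandler::close);
    }
}

You would then register this handler on the root logger (for example via a handlers entry in logging.properties), and each application's records land in their own file as long as the apps use distinct logger name prefixes.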
You could also use binary logging, and then use the binaryLog tool to filter messages for a given app.
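If you go that route, filtering the merged binary log for one application's loggers might look something like this (flag names are from the Liberty binaryLog documentation, but run binaryLog help view to confirm them for your version; the package prefix is just an example):

binaryLog view theory --includeLoggers=com.example.epic.*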
However, I'd STRONGLY recommend NOT doing any of that. New apps that follow the twelve-factor principles should write all messages to the standard output and standard error streams, which is what JUL does by default. Those streams are consumed by containers, container orchestrators, and the logging infrastructure built on top of them. That logging infrastructure lets you filter log messages on various parameters without any need for separate log files, whereas if you keep separate log files you will have to integrate them manually.

Related

From WebSphere 6.1.35 to WebSphere 8.5.5.17 - application startup impacted by filesystem scan

This is an old application and it is hard for us to get the application changed.
During the deployment of a CRM-like application, migrated from a 6.1.35 environment to an 8.5.5.17 WebSphere Application Server environment, the startup time of the application went from 10 seconds to 12 minutes.
We did some troubleshooting and found that the problem is a remote filesystem which holds 1,200,000 files. This remote filesystem is mounted on a path that is part of the WAR; the mount happens after the application is deployed.
What makes the application startup so slow is that on 8.5.5.17 all the files present in the application's path are traversed (so it tries to traverse the 1,200,000 files on the remote filesystem, which takes, as you may guess, 12 minutes...).
Does anybody know whether this changed behaviour from WAS 6.1.x to WAS 8.5.5.x is intended?
Is there any workaround to block this?
WebSphere Application Server creates a list of files in the application when starting the application. It's a bit overzealous when creating the list in that it includes files that it does not need to look at during application start. For large applications, the file list can consume a lot of memory and can take a long time to generate. Since it is a Java EE application server, it should only be concerned with files in Java-EE-defined locations when starting the application. The list is not used during application run time. Therefore, the application server should only look in the directories:
/
META-INF/*
WEB-INF
WEB-INF/classes/*
WEB-INF/lib/*
APAR PH09294, included in 8.5.5.16 and 9.5.0.1, provides a way for you to limit the file list to the above locations. You can specify a setting either in the application or in the application server. (If you are using a version prior to 8.5.5.16 or 9.5.0.1, you should look at PM37942, which is a bit more cumbersome but still works.)
To enable the setting in the application, add the following to the MANIFEST.MF of either the EAR or the WAR. (Remember when editing MANIFEST.MF, obey the formatting rules. MANIFEST.MF must end with a blank line.) Adding the setting to a WAR limits the file list for just that WAR. Adding the setting to an EAR limits the file list for all WARs in that EAR.
IBM-Enable-File-List-Include-Filter: true
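For example, a complete minimal manifest carrying the setting could look like this (the Manifest-Version line is standard manifest boilerplate; note the required trailing blank line):

Manifest-Version: 1.0
IBM-Enable-File-List-Include-Filter: true
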
You can apply the setting to the server by setting a JVM custom property.
The following JVM custom property applies the setting to all applications on the server:
Property: org.eclipse.jst.j2ee.commonarchivecore.EnableFilesListIncludeFilter
Value: true
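Since a JVM custom property ultimately becomes a system property, the equivalent generic JVM argument (for instance, in the server's JVM settings) would be:

-Dorg.eclipse.jst.j2ee.commonarchivecore.EnableFilesListIncludeFilter=true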
You can also apply the setting to all servers in a profile by adding the following line to the <WAS_PROFILE_HOME>/properties/amm.filter.properties file:
IBM-Enable-File-List-Include-Filter = true
Finally, to apply the setting to all profiles, add the above line to the <WAS_HOME>/properties/amm.filter.properties file.
Where should you set it? If you set it in the application, then wherever you deploy the application, the file list will be limited in scope. You do not depend on any setting in the application server. So for any large application, a developer should always include the setting in the application. An administrator might consider applying the setting to the entire WebSphere Application Server installation.

Logging from two different environments into a single log file

I am quite new to the log4j2 logger, and my requirement is to write logs from both an application server and a web server.
I have two different environments, each with a JBoss server deployed.
Right now a log file in the web server environment is capturing the errors, and I want the application server to write its logs to that same file.
Please suggest.
If you want the logs to be integrated together, you should use a solution like Splunk or Elasticsearch/Logstash/Kibana (ELK).
When you try to write to a file from two different processes, your file will get corrupted unless you use file locking. However, with locking your throughput decreases significantly, and it isn't supported for rolling files. So the best approach is to send the logs to a single process where they can be aggregated, for example over a socket, as sketched below.
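As a sketch of that approach with log4j2, each JVM could ship its events over a socket to one central listener instead of sharing the file (the host, port, and layout below are placeholders to adapt, not taken from the question):

<Configuration>
  <Appenders>
    <!-- Sends log events to a single aggregating process over TCP. -->
    <Socket name="central" host="log-aggregator.example.com" port="4560" protocol="TCP">
      <JsonLayout compact="true" eventEol="true"/>
    </Socket>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="central"/>
    </Root>
  </Loggers>
</Configuration>

The listener (Logstash's tcp input, or a small custom server) then owns the single log file, so no cross-process locking is needed.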

Tomcat on Windows log shipping to Logstash

I am trying to configure log shipping/consolidation using Logstash. My Tomcat servers run on Windows, and I am running into a few problems with my configuration: Tomcat on Windows, logging via log4j, with the Redis consolidator/Elasticsearch/Logstash/Kibana stack running on a single Linux server.
Fewer log shippers are available on Windows. It looks like nxlog does not work with Redis out of the box, so I have reverted to using Logstash to ship. I would like to learn what others prefer to use.
Rather than use custom appenders, I would rather have Tomcat use log4j to log to a file and then feed that file as input to be shipped to Redis. I don't want to change the log formats.
No json-event format for me - http://spredzy.wordpress.com/2013/03/02/monitor-your-cluster-of-tomcat-applications-with-logstash-and-kibana/. I can't seem to get the right file config in the shipper.conf.
Any sample config for log4j files, fed to Logstash via Redis, would help.
Thanks
I'm currently writing a Java library to send logs to Logstash using ZeroMQ (no central Redis broker required).
Disclaimer: it's not quite perfect yet, but may be worth keeping an eye on.
https://github.com/stuart-warren/logit
You can set up the standard JULI log configuration (or log4j if you are using that), and with the tomcat-valve jar you can send access logs as well by configuring server.xml.
It does, however, send logs in json-event format by default.
I'm confused as to why you wouldn't want to save all that processing on the Logstash server, though. You can (and currently probably should) log to a file in standard format as well.
logging.properties file:
# "handlers" specifies a comma separated list of log Handler
# classes. These handlers will be installed during VM startup.
# Note that these classes must be on the system classpath.
# By default we only configure a ConsoleHandler, which will only
# show messages at the INFO and above levels.
handlers= com.stuartwarren.logit.jul.ZmqAppender
# handlers= com.stuartwarren.logit.jul.ZmqAppender, java.util.logging.ConsoleHandler
# Default global logging level.
# This specifies which kinds of events are logged across
# all loggers. For any given facility this global level
# can be overridden by a facility-specific level.
# Note that the ConsoleHandler also has a separate level
# setting to limit messages printed to the console.
.level=INFO
# Limit the messages that are printed on the console to INFO and above.
com.stuartwarren.logit.jul.ZmqAppender.level=INFO
com.stuartwarren.logit.jul.ZmqAppender.socketType=PUSHPULL
com.stuartwarren.logit.jul.ZmqAppender.endpoints=tcp://localhost:2120
com.stuartwarren.logit.jul.ZmqAppender.bindConnect=CONNECT
com.stuartwarren.logit.jul.ZmqAppender.linger=1000
com.stuartwarren.logit.jul.ZmqAppender.sendHWM=1000
com.stuartwarren.logit.jul.ZmqAppender.layout=com.stuartwarren.logit.jul.Layout
com.stuartwarren.logit.jul.Layout.layoutType=logstashv1
com.stuartwarren.logit.jul.Layout.detailThreshold=WARNING
com.stuartwarren.logit.jul.Layout.tags=tag1,tag2,tag3
com.stuartwarren.logit.jul.Layout.fields=field1:value1,field2:value2,field3:value3
java.util.logging.ConsoleHandler.level = FINE
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
server.xml:
<Valve className="com.stuartwarren.logit.tomcatvalve.ZmqAppender"
layout="com.stuartwarren.logit.tomcatvalve.Layout"
socketType="PUSHPULL"
endpoints="tcp://localhost:2120"
bindConnect="CONNECT"
linger="1000"
sendHWM="1000"
layoutType="logstashv1"
iHeaders="Referer,User-Agent"
oHeaders=""
cookies=""
tags="tag1,tag2,tag3"
fields="field1:value1,field2:value2,field3:value3" />

Including a log4j.properties file in my jar, but using a different properties file at execution time (optionally)

I want to include a log4j.properties file in my Maven build, but be able to use a different properties file at execution time (when running via cron on Unix).
Any ideas?
You want to be able to change properties per environment.
There are a number of approaches to address this issue:
1. Create a directory in each environment containing the environment-specific files (log4j.properties in your example), and add that directory to the classpath in each environment.
2. Use Maven's resource filtering together with profiles to populate log4j.properties with the correct values at build time.
3. Use a build server (Jenkins, for example), which essentially automates approach 2.
Each of these approaches has its own drawbacks. I am currently using a somewhat weird combination of 2 and 3 because of Jenkins limitations.
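For the execution-time half of the question, log4j 1.x also honors the log4j.configuration system property (its value is a URL), so a cron-launched run can point at an external file without rebuilding the jar; the path and jar name below are placeholders:

java -Dlog4j.configuration=file:/etc/myapp/log4j.properties -jar myapp.jar

If the property is not set, log4j falls back to the log4j.properties bundled on the classpath, which gives you exactly the "optionally" behaviour you describe.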

Read application log written on Windows Azure

I have 10 applications that all use the same logic to write their log to a text file located in the application root folder.
I have another application which reads the log files of all the applications and shows the details on a web page.
Can the same be achieved on Windows Azure? I don't want to use the DiagnosticMonitor APIs, as I cannot change the logging logic of the applications.
Thanks,
Aman
Even if this is technically possible, it is not advisable, as the Fabric Controller can re-create any role at a whim (well, with good reasons, but unpredictably nonetheless), and whenever this happens you will lose any files stored locally on the role.
So, primarily, you should be looking for a different place to store those logs. There are many options, but all require that you change the logging logic of the application.
You could do this, but aside from the issue Yossi pointed out (the log would be ephemeral; it could get deleted at any time), you'd have a different log file on each role instance (VM). That means when you hit your web page to view the log, you'd see whatever happened to be in the log on that particular VM, instead of what you presumably want (a roll-up of the log files across all VMs).
Windows Azure Diagnostics could help, since you can configure it to copy log files off to blob storage (so no need to change the logging). But honestly I find Diagnostics a bit cumbersome for this. It will end up creating a lot of different blobs, and you'll have to change the log viewer to read all those blobs and combine them.
I personally would suggest writing a separate piece of code that monitors the log file and, for each new line, stores the line as an entity (row) in table storage. This bit of code could be launched as a startup task and just run continuously as a separate process (leaving everything else unchanged). Then modify the log viewer to read the last n entities from table storage and display them.
(I'm assuming you can modify the log viewer even if you can't modify the apps that log to the file.)
What about writing logs to something like an Azure storage table? You just need to define a unique PartitionKey/RowKey scheme, and then you can easily retrieve the log for the web page. A sketch of that idea follows.
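A rough sketch of the monitor-and-store idea in Java, using the legacy Azure Storage SDK that was current for Windows Azure roles; the table name, connection-string variable, and key scheme are illustrative choices, not requirements:

import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.table.CloudTable;
import com.microsoft.azure.storage.table.DynamicTableEntity;
import com.microsoft.azure.storage.table.EntityProperty;
import com.microsoft.azure.storage.table.TableOperation;

public class LogLineUploader {
    public static void main(String[] args) throws Exception {
        // Placeholder: connection string for your storage account.
        CloudStorageAccount account =
                CloudStorageAccount.parse(System.getenv("AZURE_STORAGE_CONNECTION_STRING"));
        CloudTable table = account.createCloudTableClient().getTableReference("applogs");
        table.createIfNotExists();

        // PartitionKey = application name; RowKey = inverted timestamp so
        // the newest lines sort first when the viewer queries the table.
        String partitionKey = "app1";
        String rowKey = String.format("%019d", Long.MAX_VALUE - System.currentTimeMillis());

        DynamicTableEntity row = new DynamicTableEntity(partitionKey, rowKey);
        row.getProperties().put("line",
                new EntityProperty("2014-01-01 12:00:00 INFO example log line"));
        table.execute(TableOperation.insert(row));
    }
}

The viewer would then query the last n rows per PartitionKey instead of opening files, which also sidesteps the one-log-file-per-VM problem described in the earlier answer.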

Resources