I am sending logs from a Spring Boot app to RabbitMQ, but I can't find a way to send these logs in JSON format (I want them to end up in Elasticsearch as documents).
My log4j.properties is:
log4j.rootLogger=INFO, consoleAppender, amqp
log4j.appender.consoleAppender=org.apache.log4j.ConsoleAppender
log4j.appender.consoleAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.consoleAppender.layout.ConversionPattern=%d %p %t [%c] <%m>%n
log4j.appender.amqp=org.springframework.amqp.rabbit.log4j.AmqpAppender
log4j.appender.amqp.host=localhost
log4j.appender.amqp.port=5672
log4j.appender.amqp.username=guest
log4j.appender.amqp.password=guest
log4j.appender.amqp.virtualHost=/
log4j.appender.amqp.exchangeName=logs_exchange_fin
log4j.appender.amqp.exchangeType=direct
log4j.appender.amqp.routingKeyPattern=service-2
log4j.appender.amqp.declareExchange=true
log4j.appender.amqp.durable=true
log4j.appender.amqp.autoDelete=false
log4j.appender.amqp.contentType=text/plain
log4j.appender.amqp.generateId=false
log4j.appender.amqp.deliveryMode=PERSISTENT
log4j.appender.amqp.layout=org.apache.log4j.PatternLayout
log4j.appender.amqp.layout.ConversionPattern=%d %p %t [%c] <%m>%n
The pattern of the logs is: %d %p %t [%c] <%m>%n
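One option, assuming you can add the logstash JSON layout dependency (net.logstash.log4j:jsonevent-layout) to the classpath, is to swap the AMQP appender's PatternLayout for a JSON layout and flag the message body as JSON. This is a sketch, not something the AmqpAppender itself provides:
# sketch: assumes the net.logstash.log4j:jsonevent-layout dependency is on the classpath;
# JSONEventLayoutV1 renders each event as a Logstash-style JSON document,
# which Elasticsearch can then ingest as-is
log4j.appender.amqp.contentType=application/json
log4j.appender.amqp.layout=net.logstash.log4j.JSONEventLayoutV1
These two lines replace the amqp layout and contentType lines above; the console appender can keep its PatternLayout.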
I am running the command below:
java -jar -Dlogging.config=C:\Users\P2932832\BPradhan\order-batch\order-batch\config\logback.xml order-batch-0.0.1-SNAPSHOT.jar --spring.config.location=C:\Users\P2932832\BPradhan\order-batch\order-batch\src\main\resources\application.yml
Then I can see the logs below. But when it is time to write the rest of the logs to logs/order-batch.log, nothing happens. If I run it from STS, it works perfectly fine and writes to order-batch.log. My users need to run it from the command line, so it would be helpful if the logs showed up in order-batch.log.
09:43:47,566 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [com.spectrum.sci] to DEBUG
09:43:47,569 |-INFO in ch.qos.logback.classic.jul.LevelChangePropagator@71f2a7d5 - Propagating DEBUG level on Logger[com.spectrum.sci] onto the JUL framework
09:43:47,572 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [httpclient] to WARN
09:43:47,576 |-INFO in ch.qos.logback.classic.jul.LevelChangePropagator@71f2a7d5 - Propagating WARN level on Logger[httpclient] onto the JUL framework
09:43:47,576 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [org.apache] to WARN
09:43:47,576 |-INFO in ch.qos.logback.classic.jul.LevelChangePropagator@71f2a7d5 - Propagating WARN level on Logger[org.apache] onto the JUL framework
09:43:47,576 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [org.springframework.context] to WARN
09:43:47,578 |-INFO in ch.qos.logback.classic.jul.LevelChangePropagator@71f2a7d5 - Propagating WARN level on Logger[org.springframework.context] onto the JUL framework
09:43:47,579 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [org.springframework.core] to WARN
09:43:47,579 |-INFO in ch.qos.logback.classic.jul.LevelChangePropagator@71f2a7d5 - Propagating WARN level on Logger[org.springframework.core] onto the JUL framework
09:43:47,580 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [org.springframework.beans] to WARN
09:43:47,580 |-INFO in ch.qos.logback.classic.jul.LevelChangePropagator@71f2a7d5 - Propagating WARN level on Logger[org.springframework.beans] onto the JUL framework
09:43:47,581 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [org.springframework.web] to WARN
09:43:47,581 |-INFO in ch.qos.logback.classic.jul.LevelChangePropagator@71f2a7d5 - Propagating WARN level on Logger[org.springframework.web] onto the JUL framework
09:43:47,581 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [org.springframework.security] to DEBUG
09:43:47,583 |-INFO in ch.qos.logback.classic.jul.LevelChangePropagator@71f2a7d5 - Propagating DEBUG level on Logger[org.springframework.security] onto the JUL framework
09:43:47,585 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.rolling.RollingFileAppender]
09:43:47,591 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [FILE]
09:43:47,637 |-INFO in c.q.l.core.rolling.TimeBasedRollingPolicy@787387795 - No compression will be used
09:43:47,645 |-INFO in c.q.l.core.rolling.TimeBasedRollingPolicy@787387795 - Will use the pattern logs/order-batch.%d{yyyy-MM-dd}.%i.log for the active file
09:43:47,651 |-INFO in ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP@7907ec20 - The date pattern is 'yyyy-MM-dd' from file name pattern 'logs/order-batch.%d{yyyy-MM-dd}.%i.log'.
09:43:47,655 |-INFO in ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP@7907ec20 - Roll-over at midnight.
09:43:47,665 |-INFO in ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP@7907ec20 - Setting initial period to Fri Apr 10 09:38:51 MDT 2020
09:43:47,666 |-WARN in ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP@7907ec20 - SizeAndTimeBasedFNATP is deprecated. Use SizeAndTimeBasedRollingPolicy instead
09:43:47,669 |-WARN in ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP@7907ec20 - For more information see http://logback.qos.ch/manual/appenders.html#SizeAndTimeBasedRollingPolicy
09:43:47,673 |-WARN in Logger[org.hibernate.validator.messageinterpolation.ResourceBundleMessageInterpolator] - No appenders present in context [default] for logger [org.hibernate.validator.messageinterpolation.ResourceBundleMessageInterpolator].
09:43:47,675 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
09:43:47,705 |-INFO in ch.qos.logback.core.rolling.RollingFileAppender[FILE] - Active log file name: logs/order-batch.log
09:43:47,707 |-INFO in ch.qos.logback.core.rolling.RollingFileAppender[FILE] - File property is set to [logs/order-batch.log]
09:43:47,720 |-INFO in ch.qos.logback.classic.joran.action.RootLoggerAction - Setting level of ROOT logger to INFO
09:43:47,721 |-INFO in ch.qos.logback.classic.jul.LevelChangePropagator@71f2a7d5 - Propagating INFO level on Logger[ROOT] onto the JUL framework
09:43:47,735 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [FILE] to Logger[ROOT]
09:43:47,737 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - End of configuration.
09:43:47,767 |-INFO in org.springframework.boot.logging.logback.SpringBootJoranConfigurator@2aaf7cc2 - Registering current configuration as safe fallback point
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.2.6.RELEASE)
Here is my application.yml
server:
  port: 8080
osm.service.url: http://localhost:8090/order-manager/order
osm.service.username: TOS_Automation
osm.service.password: TOS_Automation$123
logging.config: config/logback.xml
Here is logback.xml
<configuration debug="true">
  <!-- logger name="com.spectrum.sci" level="${log.level}" /-->
  <logger name="com.spectrum.sci" level="INFO" />
  <logger name="httpclient" level="WARN" />
  <logger name="org.apache" level="WARN" />
  <logger name="org.springframework.context" level="WARN" />
  <logger name="org.springframework.core" level="WARN" />
  <logger name="org.springframework.beans" level="WARN" />
  <logger name="org.springframework.web" level="WARN" />
  <logger name="org.springframework.batch" level="DEBUG" />
  <logger name="org.springframework.security" level="DEBUG" />
  <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>logs/order-batch.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>logs/order-batch.%d{yyyy-MM-dd}.%i.log</fileNamePattern>
      <maxHistory>10</maxHistory>
      <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
        <maxFileSize>5MB</maxFileSize>
      </timeBasedFileNamingAndTriggeringPolicy>
    </rollingPolicy>
    <encoder>
      <pattern>[%d{YYYY-MM-dd HH:mm:ss.SSS}] [%level] [Context:%logger{0}] [%X] [%msg]%n</pattern>
    </encoder>
  </appender>
  <root level="info">
    <appender-ref ref="FILE" />
  </root>
</configuration>
Instead of passing the log config file in the command using -Dlogging.config=..., you need to add a property in your application.yml pointing to the log config file. So your command should be something like:
java -jar order-batch-0.0.1-SNAPSHOT.jar --spring.config.location=C:\Users\P2932832\BPradhan\order-batch\order-batch\src\main\resources\application.yml
where application.yml contains:
logging:
  config: 'C:\Users\P2932832\BPradhan\order-batch\order-batch\config\logback.xml'
Please make sure to correctly indent this yaml snippet and check its syntax if you copy/paste. You can find more details in the Custom Log Configuration section of the docs.
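Note: logging.config is an ordinary Spring environment property, so (as far as I know) it can also be passed on the command line after the jar name, which avoids editing application.yml:
java -jar order-batch-0.0.1-SNAPSHOT.jar --logging.config=C:\Users\P2932832\BPradhan\order-batch\order-batch\config\logback.xml --spring.config.location=C:\Users\P2932832\BPradhan\order-batch\order-batch\src\main\resources\application.yml
The difference from -Dlogging.config is the position: arguments after the jar name are seen by Spring Boot, while JVM -D flags must come before it.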
Here is my script. I want to replace text in multiple destinations. How can I use a wildcard in the destination (dest=/home/*/conf/server.xml)?
- hosts: 192.168.8.11
  user: mohitmehral
  sudo: yes
  tasks:
    - replace:
        dest=/home/5/conf/server.xml
        #dest=/home/1/conf/server.xml
        #dest=/home/2/conf/server.xml
        #dest=/home/3/conf/server.xml
        #dest=/home/4/conf/server.xml
        #dest=/home/5/conf/server.xml
        regexp='pattern="%{X-Forwarded-For}i %h %t %a %p %v %q "%{Referer}i" %m "%U" "%S" "%{User-agent}i" %b %s %D"/>'
        replace='pattern="%{X-Forwarded-For}i %h %t %a %p %v %q"%{Referer}i" %m "%U" "%{User-agent}i" "%b" "%s" "%D""/>'
        backup=yes
If the regexp and replace patterns are the same for every file, then you can do it like this:
- hosts: 192.168.8.11
  user: mohitmehral
  sudo: yes
  tasks:
    - replace:
        dest="/home/{{ item }}/conf/server.xml"
        regexp='pattern="%{X-Forwarded-For}i %h %t %a %p %v %q "%{Referer}i" %m "%U" "%S" "%{User-agent}i" %b %s %D"/>'
        replace='pattern="%{X-Forwarded-For}i %h %t %a %p %v %q"%{Referer}i" %m "%U" "%{User-agent}i" "%b" "%s" "%D""/>'
        backup=yes
      with_items: [1,2,3,4,5]
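If the numbered directories can't be enumerated up front, a sketch using the find module (Ansible 2.0+; the registered variable name found_confs is made up here) would locate every server.xml under /home and loop over the matches:
- hosts: 192.168.8.11
  user: mohitmehral
  sudo: yes
  tasks:
    # discover the config files instead of hard-coding directory numbers
    - find:
        paths: /home
        patterns: server.xml
        recurse: yes
      register: found_confs
    # same regexp/replace as above, applied to each discovered path
    - replace:
        dest: "{{ item.path }}"
        regexp: 'pattern="%{X-Forwarded-For}i %h %t %a %p %v %q "%{Referer}i" %m "%U" "%S" "%{User-agent}i" %b %s %D"/>'
        replace: 'pattern="%{X-Forwarded-For}i %h %t %a %p %v %q"%{Referer}i" %m "%U" "%{User-agent}i" "%b" "%s" "%D""/>'
        backup: yes
      with_items: "{{ found_confs.files }}"
Be aware that recurse: yes matches any server.xml under /home, not just */conf/server.xml, so tighten paths if that matters.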
I am currently using goaccess-1.0.2, installed on an Amazon Linux box. The box it resides on has customized logs that were forwarded from an Apache WebApp server. What I have tried to accomplish but can't figure out is how to get GoAccess to parse our customized log.
Here is an example of the custom forwarded WebApp Log entry:
Jun 24 00:00:41 directory1 httpd-access: 55.117.170.95 www.URLaddress.com - [24/Jun/2016:00:00:41 -0700] "GET /sites/all/themes/somthing_on_demand/js/fancybox/jquery.fancybox-1.3.4.css HTTP/1.1" 304 - "https://www.IPaddress.com/my_account/yum" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36" "SESSb9948a0b21e4d377a7d82f6adbf86c91=lon7pgjlikml7q4tq954ejiao1; cookie_js=1; __utma=23285183.1119616966.1452095139.1468883973.1468963151.39; __utmb=23285183.500.10.1468963151; __utmc=23285183; __utmz=23285183.1468963151.39.39.utmcsr=fyi.URLaddress.com|utmccn=(r/INFOSEC-MAXLEN-256" "-" 57630
Here are a few log-formats I have tried:
log-format %^ %^ %^ "%h %^ %u %t \"%r\" %>s %b \"%R\" \"%u\" \"%^\" \"%^\" %D"
log-format "%h %{Host}i %{SSL_CLIENT_S_DN_CN}x %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" \"%{SHORT_COOKIE}e\" \"%{X-Forwarded-For}i\" %D"
log-format "%h %{Host}i %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" \"%{SHORT_COOKIE}e\" \"%{X-Forwarded-For}i\" %D"
I thought I would ignore the date and time format using %^, then use date format %m %d and time format %T.
I am very new at this and could really use help. Thank you for your feedback in advance.
Please try this, it works for me:
goaccess -f access.log --log-format='%^:%^:%^: %h %v %^[%d:%t %^] "%r" %s %b "%R" "%u" "%^" "%^" %D' --date-format='%d/%b/%Y' --time-format='%T'
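If you run it often, the same settings can live in a GoAccess config file (for example /etc/goaccess.conf, or one passed with --config-file; the exact path depends on your install), so the flags don't have to be repeated:
# the three leading %^ skip the syslog timestamp/host/tag prefix;
# %d and %t are taken from the bracketed Apache timestamp
date-format %d/%b/%Y
time-format %T
log-format %^:%^:%^: %h %v %^[%d:%t %^] "%r" %s %b "%R" "%u" "%^" "%^" %D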
Based on the following configuration, I am expecting log4j to write to the HDFS folder /myfolder/mysubfolder. But it's not even creating a file with the given name hadoop9.log. I tried creating hadoop9.log manually on HDFS; it still didn't work.
Am I missing anything in log4j.properties?
# Define some default values that can be overridden by system properties
hadoop.root.logger=INFO,console,RFA,DRFA
hadoop.log.dir= /myfolder/mysubfolder
hadoop.log.file=hadoop9.log
# Define the root logger to the system property "hadoop.root.logger".
log4j.rootLogger=${hadoop.root.logger}, EventCounter
# Logging Threshold
log4j.threshold=ALL
# Null Appender
log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender
#
# Rolling File Appender - cap space usage at 5gb.
#
hadoop.log.maxfilesize=256MB
hadoop.log.maxbackupindex=20
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.RFA.MaxFileSize=${hadoop.log.maxfilesize}
log4j.appender.RFA.MaxBackupIndex=${hadoop.log.maxbackupindex}
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
# Pattern format: Date LogLevel LoggerName LogMessage
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
# Debugging Pattern format
#log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
#
# Daily Rolling File Appender
#
log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}
# Rollover at midnight
log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
# 30-day backup
#log4j.appender.DRFA.MaxBackupIndex=30
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
# Pattern format: Date LogLevel LoggerName LogMessage
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
# Debugging Pattern format
#log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
#
# console
# Add "console" to rootlogger above if you want to use this
#
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
#
# TaskLog Appender
#
#Default values
hadoop.tasklog.taskid=null
hadoop.tasklog.iscleanup=false
hadoop.tasklog.noKeepSplits=4
hadoop.tasklog.totalLogFileSize=100
hadoop.tasklog.purgeLogSplits=true
hadoop.tasklog.logsRetainHours=12
log4j.appender.TLA=org.apache.hadoop.mapred.TaskLogAppender
log4j.appender.TLA.taskId=${hadoop.tasklog.taskid}
log4j.appender.TLA.isCleanup=${hadoop.tasklog.iscleanup}
log4j.appender.TLA.totalLogFileSize=${hadoop.tasklog.totalLogFileSize}
log4j.appender.TLA.layout=org.apache.log4j.PatternLayout
log4j.appender.TLA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
#
# HDFS block state change log from block manager
#
# Uncomment the following to suppress normal block state change
# messages from BlockManager in NameNode.
#log4j.logger.BlockStateChange=WARN
#
#Security appender
#
hadoop.security.logger=INFO,NullAppender
hadoop.security.log.maxfilesize=256MB
hadoop.security.log.maxbackupindex=20
log4j.category.SecurityLogger=${hadoop.security.logger}
hadoop.security.log.file=SecurityAuth-${user.name}.audit
log4j.appender.RFAS=org.apache.log4j.RollingFileAppender
log4j.appender.RFAS.File=${hadoop.log.dir}/${hadoop.security.log.file}
log4j.appender.RFAS.layout=org.apache.log4j.PatternLayout
log4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
log4j.appender.RFAS.MaxFileSize=${hadoop.security.log.maxfilesize}
log4j.appender.RFAS.MaxBackupIndex=${hadoop.security.log.maxbackupindex}
#
# Daily Rolling Security appender
#
log4j.appender.DRFAS=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFAS.File=${hadoop.log.dir}/${hadoop.security.log.file}
log4j.appender.DRFAS.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
log4j.appender.DRFAS.DatePattern=.yyyy-MM-dd
#
# hadoop configuration logging
#
# Uncomment the following line to turn off configuration deprecation warnings.
# log4j.logger.org.apache.hadoop.conf.Configuration.deprecation=WARN
#
# hdfs audit logging
#
hdfs.audit.logger=INFO,NullAppender
hdfs.audit.log.maxfilesize=256MB
hdfs.audit.log.maxbackupindex=20
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.RFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.RFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.RFAAUDIT.MaxFileSize=${hdfs.audit.log.maxfilesize}
log4j.appender.RFAAUDIT.MaxBackupIndex=${hdfs.audit.log.maxbackupindex}
#
# mapred audit logging
#
mapred.audit.logger=INFO,NullAppender
mapred.audit.log.maxfilesize=256MB
mapred.audit.log.maxbackupindex=20
log4j.logger.org.apache.hadoop.mapred.AuditLogger=${mapred.audit.logger}
log4j.additivity.org.apache.hadoop.mapred.AuditLogger=false
log4j.appender.MRAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.MRAUDIT.File=${hadoop.log.dir}/mapred-audit.log
log4j.appender.MRAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.MRAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.MRAUDIT.MaxFileSize=${mapred.audit.log.maxfilesize}
log4j.appender.MRAUDIT.MaxBackupIndex=${mapred.audit.log.maxbackupindex}
# Custom Logging levels
#log4j.logger.org.apache.hadoop.mapred.JobTracker=DEBUG
#log4j.logger.org.apache.hadoop.mapred.TaskTracker=DEBUG
#log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=DEBUG
# Jets3t library
log4j.logger.org.jets3t.service.impl.rest.httpclient.RestS3Service=ERROR
#
# Event Counter Appender
# Sends counts of logging messages at different severity levels to Hadoop Metrics.
#
log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter
#
# Job Summary Appender
#
# Use following logger to send summary to separate file defined by
# hadoop.mapreduce.jobsummary.log.file :
# hadoop.mapreduce.jobsummary.logger=INFO,JSA
#
hadoop.mapreduce.jobsummary.logger=${hadoop.root.logger}
hadoop.mapreduce.jobsummary.log.file=hadoop-mapreduce.jobsummary.log
hadoop.mapreduce.jobsummary.log.maxfilesize=256MB
hadoop.mapreduce.jobsummary.log.maxbackupindex=20
log4j.appender.JSA=org.apache.log4j.RollingFileAppender
log4j.appender.JSA.File=${hadoop.log.dir}/${hadoop.mapreduce.jobsummary.log.file}
log4j.appender.JSA.MaxFileSize=${hadoop.mapreduce.jobsummary.log.maxfilesize}
log4j.appender.JSA.MaxBackupIndex=${hadoop.mapreduce.jobsummary.log.maxbackupindex}
log4j.appender.JSA.layout=org.apache.log4j.PatternLayout
log4j.appender.JSA.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
log4j.logger.org.apache.hadoop.mapred.JobInProgress$JobSummary=${hadoop.mapreduce.jobsummary.logger}
log4j.additivity.org.apache.hadoop.mapred.JobInProgress$JobSummary=false
#
# Yarn ResourceManager Application Summary Log
#
# Set the ResourceManager summary log filename
yarn.server.resourcemanager.appsummary.log.file=rm-appsummary.log
# Set the ResourceManager summary log level and appender
yarn.server.resourcemanager.appsummary.logger=${hadoop.root.logger}
#yarn.server.resourcemanager.appsummary.logger=INFO,RMSUMMARY
# To enable AppSummaryLogging for the RM,
# set yarn.server.resourcemanager.appsummary.logger to
# <LEVEL>,RMSUMMARY in hadoop-env.sh
# Appender for ResourceManager Application Summary Log
# Requires the following properties to be set
# - hadoop.log.dir (Hadoop Log directory)
# - yarn.server.resourcemanager.appsummary.log.file (resource manager app summary log filename)
# - yarn.server.resourcemanager.appsummary.logger (resource manager app summary log level and appender)
log4j.logger.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=${yarn.server.resourcemanager.appsummary.logger}
log4j.additivity.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=false
log4j.appender.RMSUMMARY=org.apache.log4j.RollingFileAppender
log4j.appender.RMSUMMARY.File=${hadoop.log.dir}/${yarn.server.resourcemanager.appsummary.log.file}
log4j.appender.RMSUMMARY.MaxFileSize=256MB
log4j.appender.RMSUMMARY.MaxBackupIndex=20
log4j.appender.RMSUMMARY.layout=org.apache.log4j.PatternLayout
log4j.appender.RMSUMMARY.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
# HS audit log configs
#mapreduce.hs.audit.logger=INFO,HSAUDIT
#log4j.logger.org.apache.hadoop.mapreduce.v2.hs.HSAuditLogger=${mapreduce.hs.audit.logger}
#log4j.additivity.org.apache.hadoop.mapreduce.v2.hs.HSAuditLogger=false
#log4j.appender.HSAUDIT=org.apache.log4j.DailyRollingFileAppender
#log4j.appender.HSAUDIT.File=${hadoop.log.dir}/hs-audit.log
#log4j.appender.HSAUDIT.layout=org.apache.log4j.PatternLayout
#log4j.appender.HSAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
#log4j.appender.HSAUDIT.DatePattern=.yyyy-MM-dd
# Http Server Request Logs
#log4j.logger.http.requests.namenode=INFO,namenoderequestlog
#log4j.appender.namenoderequestlog=org.apache.hadoop.http.HttpRequestLogAppender
#log4j.appender.namenoderequestlog.Filename=${hadoop.log.dir}/jetty-namenode-yyyy_mm_dd.log
#log4j.appender.namenoderequestlog.RetainDays=3
#log4j.logger.http.requests.datanode=INFO,datanoderequestlog
#log4j.appender.datanoderequestlog=org.apache.hadoop.http.HttpRequestLogAppender
#log4j.appender.datanoderequestlog.Filename=${hadoop.log.dir}/jetty-datanode-yyyy_mm_dd.log
#log4j.appender.datanoderequestlog.RetainDays=3
#log4j.logger.http.requests.resourcemanager=INFO,resourcemanagerrequestlog
#log4j.appender.resourcemanagerrequestlog=org.apache.hadoop.http.HttpRequestLogAppender
#log4j.appender.resourcemanagerrequestlog.Filename=${hadoop.log.dir}/jetty-resourcemanager-yyyy_mm_dd.log
#log4j.appender.resourcemanagerrequestlog.RetainDays=3
#log4j.logger.http.requests.jobhistory=INFO,jobhistoryrequestlog
#log4j.appender.jobhistoryrequestlog=org.apache.hadoop.http.HttpRequestLogAppender
#log4j.appender.jobhistoryrequestlog.Filename=${hadoop.log.dir}/jetty-jobhistory-yyyy_mm_dd.log
#log4j.appender.jobhistoryrequestlog.RetainDays=3
#log4j.logger.http.requests.nodemanager=INFO,nodemanagerrequestlog
#log4j.appender.nodemanagerrequestlog=org.apache.hadoop.http.HttpRequestLogAppender
#log4j.appender.nodemanagerrequestlog.Filename=${hadoop.log.dir}/jetty-nodemanager-yyyy_mm_dd.log
#log4j.appender.nodemanagerrequestlog.RetainDays=3
log4j.logger.org.apache.zookeeper=ERROR
log4j.logger.com.mapr.util.zookeeper=WARN
log4j.logger.org.apache.hadoop.yarn.client.MapRZKBasedRMFailoverProxyProvider=WARN
The RollingFileAppender will only write to local disk. Unless you can somehow mount your HDFS so it looks like a local disk to your OS, it won't work. You have to choose another Log4j appender type that supports remote logging, such as the Flume appender, or roll your own.
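For example, if your cluster runs the HDFS NFS gateway, the filesystem can be mounted so it does look local to the OS. This is a sketch with a placeholder hostname and mount point, not a tested recipe:
# mount HDFS via the NFS gateway (vers=3, nolock are the options the gateway docs call for;
# nn-host is a placeholder for the host running the gateway)
mount -t nfs -o vers=3,proto=tcp,nolock nn-host:/ /mnt/hdfs
# then point the log dir at the mount in log4j.properties:
# hadoop.log.dir=/mnt/hdfs/myfolder/mysubfolder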
Since the 'Daily Rolling File Appender' does not let us set MaxBackupIndex, I want to use the 'Rolling File Appender' for the JobTracker and TaskTracker daemon logs. How can I do that? Following is the log4j configuration I am using. (I am using Hadoop CDH4 beta 2.)
# Define some default values that can be overridden by system properties
hadoop.root.logger=INFO,console
hadoop.log.dir=.
hadoop.log.file=hadoop.log
# Define the root logger to the system property "hadoop.root.logger".
log4j.rootLogger=${hadoop.root.logger}, EventCounter
# Logging Threshold
log4j.threshold=ALL
# Null Appender
log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender
#
# Rolling File Appender - cap space usage at 5gb.
#
hadoop.log.maxfilesize=100MB
hadoop.log.maxbackupindex=3
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.RFA.MaxFileSize=${hadoop.log.maxfilesize}
log4j.appender.RFA.MaxBackupIndex=${hadoop.log.maxbackupindex}
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
# Pattern format: Date LogLevel LoggerName LogMessage
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
# Debugging Pattern format
#log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
#
# Daily Rolling File Appender
#
log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}
# Rollover at midnight
log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
# 30-day backup
log4j.appender.DRFA.MaxBackupIndex=3
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
# Pattern format: Date LogLevel LoggerName LogMessage
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
# Debugging Pattern format
#log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
#
# console
# Add "console" to rootlogger above if you want to use this
#
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
#
# TaskLog Appender
#
#Default values
hadoop.tasklog.taskid=null
hadoop.tasklog.iscleanup=false
hadoop.tasklog.noKeepSplits=4
hadoop.tasklog.totalLogFileSize=100
hadoop.tasklog.purgeLogSplits=true
hadoop.tasklog.logsRetainHours=12
log4j.appender.TLA=org.apache.hadoop.mapred.TaskLogAppender
log4j.appender.TLA.taskId=${hadoop.tasklog.taskid}
log4j.appender.TLA.isCleanup=${hadoop.tasklog.iscleanup}
log4j.appender.TLA.totalLogFileSize=${hadoop.tasklog.totalLogFileSize}
log4j.appender.TLA.layout=org.apache.log4j.PatternLayout
log4j.appender.TLA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
#
#Security appender
#
hadoop.security.logger=INFO,console
hadoop.security.log.maxfilesize=100MB
hadoop.security.log.maxbackupindex=3
log4j.category.SecurityLogger=${hadoop.security.logger}
hadoop.security.log.file=SecurityAuth.audit
log4j.appender.RFAS=org.apache.log4j.RollingFileAppender
log4j.appender.RFAS.File=${hadoop.log.dir}/${hadoop.security.log.file}
log4j.appender.RFAS.layout=org.apache.log4j.PatternLayout
log4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
log4j.appender.RFAS.MaxFileSize=${hadoop.security.log.maxfilesize}
log4j.appender.RFAS.MaxBackupIndex=${hadoop.security.log.maxbackupindex}
#
# Daily Rolling Security appender
#
log4j.appender.DRFAS=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFAS.File=${hadoop.log.dir}/${hadoop.security.log.file}
log4j.appender.DRFAS.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
log4j.appender.DRFAS.DatePattern=.yyyy-MM-dd
#
# hdfs audit logging
#
hdfs.audit.logger=ERROR,NullAppender
hdfs.audit.log.maxfilesize=100MB
hdfs.audit.log.maxbackupindex=3
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.RFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.RFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.RFAAUDIT.MaxFileSize=${hdfs.audit.log.maxfilesize}
log4j.appender.RFAAUDIT.MaxBackupIndex=${hdfs.audit.log.maxbackupindex}
#
# mapred audit logging
#
mapred.audit.logger=INFO,console
mapred.audit.log.maxfilesize=100MB
mapred.audit.log.maxbackupindex=3
log4j.logger.org.apache.hadoop.mapred.AuditLogger=${mapred.audit.logger}
log4j.additivity.org.apache.hadoop.mapred.AuditLogger=false
log4j.appender.MRAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.MRAUDIT.File=${hadoop.log.dir}/mapred-audit.log
log4j.appender.MRAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.MRAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.MRAUDIT.MaxFileSize=${mapred.audit.log.maxfilesize}
log4j.appender.MRAUDIT.MaxBackupIndex=${mapred.audit.log.maxbackupindex}
# Custom Logging levels
#log4j.logger.org.apache.hadoop.mapred.JobTracker=DEBUG
#log4j.logger.org.apache.hadoop.mapred.TaskTracker=DEBUG
#log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=DEBUG
# Jets3t library
log4j.logger.org.jets3t.service.impl.rest.httpclient.RestS3Service=ERROR
#
# Event Counter Appender
# Sends counts of logging messages at different severity levels to Hadoop Metrics.
#
log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter
#
# Job Summary Appender
#
# Use following logger to send summary to separate file defined by
# hadoop.mapreduce.jobsummary.log.file :
# hadoop.mapreduce.jobsummary.logger=INFO,JSA
#
hadoop.mapreduce.jobsummary.logger=${hadoop.root.logger}
hadoop.mapreduce.jobsummary.log.file=hadoop-mapreduce.jobsummary.log
hadoop.mapreduce.jobsummary.log.maxfilesize=100MB
hadoop.mapreduce.jobsummary.log.maxbackupindex=3
log4j.appender.JSA=org.apache.log4j.RollingFileAppender
log4j.appender.JSA.File=${hadoop.log.dir}/${hadoop.mapreduce.jobsummary.log.file}
log4j.appender.JSA.MaxFileSize=${hadoop.mapreduce.jobsummary.log.maxfilesize}
log4j.appender.JSA.MaxBackupIndex=${hadoop.mapreduce.jobsummary.log.maxbackupindex}
log4j.appender.JSA.layout=org.apache.log4j.PatternLayout
log4j.appender.JSA.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
log4j.logger.org.apache.hadoop.mapred.JobInProgress$JobSummary=${hadoop.mapreduce.jobsummary.logger}
log4j.additivity.org.apache.hadoop.mapred.JobInProgress$JobSummary=false
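A sketch of one way to do it, assuming the CDH4 daemons still read the hadoop.root.logger system property set in hadoop-env.sh (the RFA appender is already defined above, so only the daemon JVM options need to change; adjust the variable names to your distribution):
# hadoop-env.sh: route the daemon root loggers to the RFA appender defined above
export HADOOP_JOBTRACKER_OPTS="-Dhadoop.root.logger=INFO,RFA ${HADOOP_JOBTRACKER_OPTS}"
export HADOOP_TASKTRACKER_OPTS="-Dhadoop.root.logger=INFO,RFA ${HADOOP_TASKTRACKER_OPTS}"
With that in place, the hadoop.log.maxfilesize and hadoop.log.maxbackupindex settings of the RFA block govern the daemon logs instead of the DRFA date-based rollover.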