I want to change the log4j2 configuration of Elasticsearch in the following way: logs from Elasticsearch should be saved in directories as /path/to/log/{year}/{month}/{day}/cluster_name.log, but TimeBasedTriggeringPolicy only triggers rollover at the end of the day. I tried to use TimeBasedRollingPolicy, but it cannot be configured through a *.properties file. I rewrote the whole log4j2.properties as a log4j2.xml file, but Elasticsearch requires log4j2.properties. In the end I decided to give up on writing the current day's logs into the dated directory: I returned to TimeBasedTriggeringPolicy and used the filePattern /path/to/log/%d{yyyy/MM/dd}/cluster_name.log, but it still doesn't work.
The relevant part of my config file:
appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName = /path/to/log/cluster_name.log
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%.-10000m%n
appender.rolling.filePattern = /path/to/log/%d{yyyy/MM/dd}/cluster_name.log
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
I think the %d{yyyy/MM/dd} pattern will try to create a directory with a name like 2017/09/19, which is not a valid directory name. That is why it is not working.
Try the filePattern below:
appender.rolling.filePattern = /path/to/log/$${date:yyyy}/$${date:MM}/$${date:dd}/cluster_name_%d{yyyy-MM-dd}.log
It will rotate log files like this:
/path/to/log/{year}/{month}/{day}/cluster_name_{date}.log
Including the date in the file name is mandatory; without it, rollover may not work.
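Putting the corrected filePattern together with the appender from the question, a full configuration sketch (reusing the question's paths and layout, so treat the values as placeholders) looks like this:
appender.rolling.type = RollingFile
appender.rolling.name = rolling
# the active log stays at the top level
appender.rolling.fileName = /path/to/log/cluster_name.log
# archived logs go into year/month/day subdirectories, resolved at rollover time
appender.rolling.filePattern = /path/to/log/$${date:yyyy}/$${date:MM}/$${date:dd}/cluster_name_%d{yyyy-MM-dd}.log
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%.-10000m%n
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
With this pattern, a rollover on 2017-09-19 produces /path/to/log/2017/09/19/cluster_name_2017-09-19.log. The $$ defers evaluation of the date lookups until rollover time, and the year/month/day directories are created as needed.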
I have a Hive UDF and I am trying to redirect logs from my UDF to hive-server2.log. I am using Beeline to execute queries, so I have modified the following file:
/etc/hive/conf.dist/beeline-log4j2.properties
status = INFO
name = BeelineLog4j2
packages = org.apache.hadoop.hive.ql.log
# list of properties
property.hive.log.level = INFO
property.hive.root.logger = console
# list of all appenders
appenders = console
# console appender
appender.console.type = Console
appender.console.name = console
appender.console.target = SYSTEM_ERR
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{yy/MM/dd HH:mm:ss} [%t]: %p %c{2}: %m%n
# list of all loggers
loggers = HiveConnection
# HiveConnection logs useful info for dynamic service discovery
logger.HiveConnection.name = org.apache.hive.jdbc.HiveConnection
logger.HiveConnection.level = INFO
# root logger
rootLogger.level = ${sys:hive.log.level}
rootLogger.appenderRefs = root
rootLogger.appenderRef.root.ref = ${sys:hive.root.logger}
# logger for UDF
log4j.logger.com.abc=DEBUG,console
I tried redirecting the logs to a separate file; however, it's not working.
log4j.logger.com.abc=DEBUG, console, rollingFile
log4j.appender.rollingFile=org.apache.log4j.RollingFileAppender
log4j.appender.rollingFile.File=/tmp/bmo.log
log4j.appender.rollingFile.layout=org.apache.log4j.PatternLayout
log4j.appender.rollingFile.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %m%n
log4j.appender.rollingFile.MaxFileSize=10MB
log4j.appender.rollingFile.MaxBackupIndex=5
log4j.appender.rollingFile.append=true
I am facing two issues:
No matter what level I set for my logger, I only get INFO logs; I am not able to see DEBUG logs.
If I use an aggregate function (e.g. min or max) with my UDF, I am not able to see any logs at all. Where can I find those logs? Logs appear only if I apply just the UDF to a column.
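Note that the added lines use log4j 1.x syntax (log4j.logger.*, log4j.appender.*), which log4j2's properties format does not interpret (unless the log4j 1.x compatibility bridge is in use), so they are effectively ignored. A sketch of the equivalent configuration in log4j2 syntax, assuming the com.abc package and /tmp/bmo.log path from the snippet above, would be:
# extend the existing appender and logger lists
appenders = console, rollingFile
loggers = HiveConnection, abc
# rolling file appender, mirroring the intended 10MB / 5-backups setup
appender.rollingFile.type = RollingFile
appender.rollingFile.name = rollingFile
appender.rollingFile.fileName = /tmp/bmo.log
appender.rollingFile.filePattern = /tmp/bmo-%i.log
appender.rollingFile.layout.type = PatternLayout
appender.rollingFile.layout.pattern = %d{yyyy-MM-dd HH:mm:ss} %-5p %m%n
appender.rollingFile.policies.type = Policies
appender.rollingFile.policies.size.type = SizeBasedTriggeringPolicy
appender.rollingFile.policies.size.size = 10MB
appender.rollingFile.strategy.type = DefaultRolloverStrategy
appender.rollingFile.strategy.max = 5
# DEBUG logger for the UDF package
logger.abc.name = com.abc
logger.abc.level = debug
logger.abc.additivity = false
logger.abc.appenderRefs = rollingFile, console
logger.abc.appenderRef.rollingFile.ref = rollingFile
logger.abc.appenderRef.console.ref = console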
I am using log4j2 in Spring Boot. Every time I restart the program, all the logs are cleared. I want the logs not to be cleared on restart; instead, once they reach 100 MB in total, the oldest entries should be automatically removed until the total size is less than or equal to 100 MB.
status = debug
name = PropertiesConfig
#Make sure to change log file path as per your need
property.logPath = C:\\Users\\jason\\Documents\\log\\
filters = threshold
filter.threshold.type = ThresholdFilter
filter.threshold.level = debug
appenders = rolling
appender.rolling.type = RollingFile
appender.rolling.name = RollingFile
appender.rolling.fileName = ${logPath}app.log
appender.rolling.filePattern = debug-backup-%d{MM-dd-yy-HH-mm-ss}-%i.log.gz
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = %d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 100MB
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.max = 20
loggers = rolling
#Make sure to change the package structure as per your application
logger.rolling.name = com.jason
logger.rolling.level = debug
logger.rolling.additivity = false
logger.rolling.appenderRef.rolling.ref = RollingFile
Your file pattern says you will roll over the file every second, or when a file reaches 100MB, and that you will keep a maximum of 20 files per second.
That seems at odds with asking to keep only 100MB of files in total; to do what you are asking, you would need to add a Delete action to the DefaultRolloverStrategy. Something like:
appender.rolling.strategy.action.type = Delete
appender.rolling.strategy.action.basepath = ${logPath}
appender.rolling.strategy.action.maxdepth = 1
appender.rolling.strategy.action.condition.type = IfFileName
appender.rolling.strategy.action.condition.glob = debug-backup-*.log.gz
appender.rolling.strategy.action.condition.nested_condition.type = IfAccumulatedFileSize
appender.rolling.strategy.action.condition.nested_condition.exceeds = 100MB
This will keep the newest log files and delete the oldest ones once the files matching the glob use more than 100MB of space.
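Putting it together with the strategy lines already in the question, the complete strategy section would read as follows (a sketch; note that because the question's filePattern has no directory prefix, the rolled backups may end up in the process working directory rather than ${logPath}, so you may also want to prefix the filePattern with ${logPath}):
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.max = 20
appender.rolling.strategy.action.type = Delete
appender.rolling.strategy.action.basepath = ${logPath}
appender.rolling.strategy.action.maxdepth = 1
# only rolled, compressed backups match the glob, so the active app.log is never deleted
appender.rolling.strategy.action.condition.type = IfFileName
appender.rolling.strategy.action.condition.glob = debug-backup-*.log.gz
appender.rolling.strategy.action.condition.nested_condition.type = IfAccumulatedFileSize
appender.rolling.strategy.action.condition.nested_condition.exceeds = 100MB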
I have installed Elasticsearch (6.6.0) on CentOS 7. I want to add some more properties for rotating logs, for example: if the size reaches 50MB, rotate and compress. But if I add any more configuration to the /etc/elasticsearch/log4j2.properties file and restart the Elasticsearch server, it fails.
My current log4j2.properties file:
status = error
# log action execution errors for easier debugging
logger.action.name = org.elasticsearch.action
logger.action.level = debug
appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%m%n
appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%.-10000m%n
appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.log.gz
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
rootLogger.level = info
rootLogger.appenderRef.console.ref = console
rootLogger.appenderRef.rolling.ref = rolling
When I try to add the following, which is how the Elasticsearch documentation says to add configuration:
appender.rolling.policies.size.size = 2MB
appender.rolling.strategy.action.condition.age = 3D
appender.rolling.strategy.action.type = Delete
appender.rolling.strategy.action.condition.type = IfFileName
it fails with this error:
Exception in thread "main" org.apache.logging.log4j.core.config.ConfigurationException: No type attribute provided for component size
at org.apache.logging.log4j.core.config.properties.PropertiesConfigurationBuilder.createComponent(PropertiesConfigurationBuilder.java:333)
at org.apache.logging.log4j.core.config.properties.PropertiesConfigurationBuilder.processRemainingProperties(PropertiesConfigurationBuilder.java:347)
at org.apache.logging.log4j.core.config.properties.PropertiesConfigurationBuilder.createComponent(PropertiesConfigurationBuilder.java:336)
at org.apache.logging.log4j.core.config.properties.PropertiesConfigurationBuilder.processRemainingProperties(PropertiesConfigurationBuilder.java:347)
at org.apache.logging.log4j.core.config.properties.PropertiesConfigurationBuilder.createAppender(PropertiesConfigurationBuilder.java:224)
at org.apache.logging.log4j.core.config.properties.PropertiesConfigurationBuilder.build(PropertiesConfigurationBuilder.java:157)
at org.apache.logging.log4j.core.config.properties.PropertiesConfigurationFactory.getConfiguration(PropertiesConfigurationFactory.java:56)
at org.apache.logging.log4j.core.config.properties.PropertiesConfigurationFactory.getConfiguration(PropertiesConfigurationFactory.java:35)
at org.apache.logging.log4j.core.config.ConfigurationFactory.getConfiguration(ConfigurationFactory.java:244)
at org.elasticsearch.common.logging.LogConfigurator$1.visitFile(LogConfigurator.java:105)
at org.elasticsearch.common.logging.LogConfigurator$1.visitFile(LogConfigurator.java:101)
at java.nio.file.Files.walkFileTree(Files.java:2670)
at org.elasticsearch.common.logging.LogConfigurator.configure(LogConfigurator.java:101)
at org.elasticsearch.common.logging.LogConfigurator.configure(LogConfigurator.java:84)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:316)
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:123)
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:114)
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:67)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:122)
at org.elasticsearch.cli.Command.main(Command.java:88)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:91)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:84)
And I have a warning in /var/log/elasticsearch/elasticsearch_deprecation.log:
[2018-02-20T02:09:32,694][WARN ][o.e.d.e.NodeEnvironment ] ES has detected the [path.data] folder using the cluster name as a folder [/data/es], Elasticsearch 6.0 will not allow the cluster name as a folder within the data path
Can anyone please explain how to add the configuration to the log4j2.properties file?
As the error states, you are missing the type attribute for the size configuration. You are also missing the type attribute for the RolloverStrategy.
Try the configuration below:
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 2 MB
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.action.type = Delete
appender.rolling.strategy.action.basePath = ${sys:es.logs.base_path}${sys:file.separator}
appender.rolling.strategy.action.maxDepth = 1
appender.rolling.strategy.action.ifLastModified.type = IfLastModified
appender.rolling.strategy.action.ifLastModified.age = 3d
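To avoid the Delete action considering unrelated files under the base path, you can additionally scope it with an IfFileName condition and nest the age check inside it, following the pattern used in the Elasticsearch docs. A sketch (replacing the two ifLastModified lines above):
appender.rolling.strategy.action.condition.type = IfFileName
# only files matching the cluster-name glob are deletion candidates
appender.rolling.strategy.action.condition.glob = ${sys:es.logs.cluster_name}-*
appender.rolling.strategy.action.condition.nested_condition.type = IfLastModified
appender.rolling.strategy.action.condition.nested_condition.age = 3d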
I am using the configuration below, taken from the Elasticsearch docs. Instead of waiting 7D or a day, how can I test this immediately?
Below is my log4j2.properties file:
...
appender.deprecation_rolling.type = RollingFile
appender.deprecation_rolling.name = deprecation_rolling
appender.deprecation_rolling.fileName = ${sys:es.logs}_deprecation.log
appender.deprecation_rolling.layout.type = PatternLayout
appender.deprecation_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%.10000m%n
appender.deprecation_rolling.filePattern = ${sys:es.logs}_deprecation-%i.log.gz
appender.deprecation_rolling.policies.type = Policies
appender.deprecation_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.deprecation_rolling.policies.size.size = 1GB
appender.deprecation_rolling.strategy.type = DefaultRolloverStrategy
appender.deprecation_rolling.strategy.max = 4
logger.deprecation.name = org.elasticsearch.deprecation
logger.deprecation.level = warn
logger.deprecation.appenderRef.deprecation_rolling.ref = deprecation_rolling
logger.deprecation.additivity = false
...
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.action.type = Delete
appender.rolling.strategy.action.basepath = ${sys:es.logs.base_path}
appender.rolling.strategy.action.condition.type = IfLastModified
appender.rolling.strategy.action.condition.age = 1D
appender.rolling.strategy.action.PathConditions.type = IfFileName
appender.rolling.strategy.action.PathConditions.glob = ${sys:es.logs.cluster_name}-*
Note: I am using elasticsearch 5.0.1
Update: I do not want to wait a day (1D) to check whether the log files are being deleted. How can I test this within 10 minutes or so? Something like: rolling happens every minute, and deletion happens for logs older than 10 minutes.
Yes, there is a way.
I use a size-based triggering policy to force rollover, and with it the Delete action, so I can test whether my log4j2.properties works.
This is an example of our log4j2.properties file; the change is the size policy added at the end.
appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%.-10000m%n
appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.log.gz
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 100KB
Then I raise the logging level to DEBUG on Elasticsearch:
PUT /_cluster/settings
{"transient":{"logger._root":"DEBUG"}}
That way I generate a lot of log output, triggering the RollingFile appender and its associated actions.
So you can quickly check your log4j2.properties file without waiting 24 hours.
When you want to stop the test, set the level back to the default value:
PUT /_cluster/settings
{"transient":{"logger._root":"ERROR"}}
I have several Java applications using Log4J that may or may not be on the same host.
I've successfully configured Flume and Log4J to persist a single file per day, but how can I make Flume create separate directories based on host and application name?
This is my Flume Log4J Appender:
<Flume name="flumeLogger" compress="false" batchSize="1" type="persistent" dataDir="logs/flume">
<Agent host="myhost" port="41414"/>
<PatternLayout pattern="#### %d{ISO8601} %p ${env:COMPUTERNAME} my-app %c %m ####"/>
</Flume>
And this is my Flume Configuration:
agent1.sinks.hdfs-sink1.type = hdfs
agent1.sinks.hdfs-sink1.hdfs.path = hdfs://localhost:9000/logs/%{host}
agent1.sinks.hdfs-sink1.hdfs.filePrefix = %y-%m-%d
agent1.sinks.hdfs-sink1.hdfs.round = true
agent1.sinks.hdfs-sink1.hdfs.roundValue = 1
agent1.sinks.hdfs-sink1.hdfs.roundUnit = day
agent1.sinks.hdfs-sink1.hdfs.rollInterval = 0
agent1.sinks.hdfs-sink1.hdfs.rollFile = 0
agent1.sinks.hdfs-sink1.hdfs.rollSize = 0
agent1.sinks.hdfs-sink1.hdfs.rollCount = 0
agent1.sinks.hdfs-sink1.hdfs.writeFormat = Text
agent1.sinks.hdfs-sink1.hdfs.fileType = DataStream
agent1.sinks.hdfs-sink1.hdfs.batchSize = 1
agent1.sinks.hdfs-sink1.hdfs.useLocalTimeStamp = true
agent1.channels = ch1
agent1.sources = avro-source1
agent1.sinks = hdfs-sink1
agent1.sinks.hdfs-sink1.channel = ch1
agent1.sources.avro-source1.channels = ch1
From the documentation:
%{host}: Substitute value of event header named “host”. Arbitrary header names are supported.
However, all my logs are being written to /logs instead of /logs/myhost, as specified in the hdfs.path property.
Additionally, how can I define arbitrary header names in the Log4J Appender?
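One way to populate the host header (a sketch, not verified against this exact setup) is to add interceptors on the Flume Avro source: the built-in host interceptor stamps each event with a host header, and a static interceptor can attach an arbitrary header such as an application name. The app key and my-app value below are placeholders:
# interceptors run on the source and add headers to every event
agent1.sources.avro-source1.interceptors = host-int app-int
# built-in host interceptor: puts the hostname under the "host" header
agent1.sources.avro-source1.interceptors.host-int.type = host
agent1.sources.avro-source1.interceptors.host-int.useIP = false
agent1.sources.avro-source1.interceptors.host-int.hostHeader = host
# static interceptor: adds a fixed header (placeholder key/value)
agent1.sources.avro-source1.interceptors.app-int.type = static
agent1.sources.avro-source1.interceptors.app-int.key = app
agent1.sources.avro-source1.interceptors.app-int.value = my-app
# the sink path can then reference both headers
agent1.sinks.hdfs-sink1.hdfs.path = hdfs://localhost:9000/logs/%{host}/%{app}
Note that the host interceptor records the hostname of the machine running the Flume agent, not of the application that sent the event; if applications on different hosts share one agent, each host would need its own agent (or the applications would have to set the header themselves) for %{host} to distinguish them.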