How can I configure multiple Loggers in a YAML file - Spring

I am unable to configure multiple loggers in my YAML file; the last logger overrides the previous ones.
Here is my code:
Loggers:
  Logger:
    - name: com.example
      additivity: false
      level: info
      AppenderRef:
        - ref: RollingFileAppender_Normal
          level: info
    - name: com.example
      additivity: false
      level: info
      AppenderRef:
        - ref: RollingFileAppender_JSON
          level: info
All logs end up in the RollingFileAppender_JSON appender.

I found the answer to my question. The last logger wins because Log4j2 registers loggers by name, so the second com.example entry replaces the first. There are two solutions to the above problem.
1) Attach both appender references to a single logger entry:
Loggers:
  Logger:
    - name: com.example
      additivity: false
      level: info
      AppenderRef:
        - ref: RollingFileAppender_Normal
        - ref: RollingFileAppender_JSON
        - level: info
2) By keeping 'additivity: false' only in the first logger:
Loggers:
  Logger:
    - name: com.example
      level: info
      additivity: false
      AppenderRef:
        - ref: RollingFileAppender_Normal
          level: info
    - name: com.example
      level: info
      AppenderRef:
        - ref: RollingFileAppender_JSON
          level: info
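For completeness, here is a minimal sketch of the Appenders section that these loggers reference. It was not part of the original question, so the file names, patterns, and layouts below are assumptions; adjust them to your setup:
Appenders:
  RollingFile:
    - name: RollingFileAppender_Normal
      fileName: logs/app.log              # assumed path
      filePattern: "logs/app-%i.log.gz"
      PatternLayout:
        pattern: "%d %p %c - %m%n"
      Policies:
        SizeBasedTriggeringPolicy:
          size: 10MB
    - name: RollingFileAppender_JSON
      fileName: logs/app.json             # assumed path
      filePattern: "logs/app-%i.json.gz"
      JsonLayout:
        compact: true
        eventEol: true
      Policies:
        SizeBasedTriggeringPolicy:
          size: 10MB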

Related

Unable to execute import-hive.sh

I am getting the below error while running import-hive.sh. Could you please help me out with this?
hadoop@0.0.0.0:~/apache-atlas-2.1.0/hook/apache-atlas-hive-hook-2.1.0/hook-bin$ ./import-hive.sh
Using Hive configuration directory [/home/hadoop/hive/conf]
Log file for import is /home/hadoop/apache-atlas-2.1.0/hook/apache-atlas-hive-hook-2.1.0/logs/import-hive.log
2021-07-13T15:43:21,449 INFO [main] org.apache.atlas.ApplicationProperties - Looking for atlas-application.properties in classpath
2021-07-13T15:43:21,452 INFO [main] org.apache.atlas.ApplicationProperties - Loading atlas-application.properties from file:/home/hadoop/hive/conf/atlas-application.properties
2021-07-13T15:43:21,505 INFO [main] org.apache.atlas.ApplicationProperties - Using graphdb backend 'janus'
2021-07-13T15:43:21,505 INFO [main] org.apache.atlas.ApplicationProperties - Using storage backend 'hbase2'
2021-07-13T15:43:21,506 INFO [main] org.apache.atlas.ApplicationProperties - Using index backend 'solr'
2021-07-13T15:43:21,506 INFO [main] org.apache.atlas.ApplicationProperties - Atlas is running in MODE: PROD.
2021-07-13T15:43:21,506 INFO [main] org.apache.atlas.ApplicationProperties - Setting solr-wait-searcher property 'true'
2021-07-13T15:43:21,506 INFO [main] org.apache.atlas.ApplicationProperties - Setting index.search.map-name property 'false'
2021-07-13T15:43:21,506 INFO [main] org.apache.atlas.ApplicationProperties - Setting atlas.graph.index.search.max-result-set-size = 150
2021-07-13T15:43:21,506 INFO [main] org.apache.atlas.ApplicationProperties - Property (set to default) atlas.graph.cache.db-cache = true
2021-07-13T15:43:21,506 INFO [main] org.apache.atlas.ApplicationProperties - Property (set to default) atlas.graph.cache.db-cache-clean-wait = 20
2021-07-13T15:43:21,506 INFO [main] org.apache.atlas.ApplicationProperties - Property (set to default) atlas.graph.cache.db-cache-size = 0.5
2021-07-13T15:43:21,506 INFO [main] org.apache.atlas.ApplicationProperties - Property (set to default) atlas.graph.cache.tx-cache-size = 15000
2021-07-13T15:43:21,506 INFO [main] org.apache.atlas.ApplicationProperties - Property (set to default) atlas.graph.cache.tx-dirty-size = 120
Enter username for atlas :- admin
Enter password for atlas :-
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/security/authentication/client/ConnectionConfigurator
at org.apache.atlas.AtlasBaseClient.getClient(AtlasBaseClient.java:287)
at org.apache.atlas.AtlasBaseClient.initializeState(AtlasBaseClient.java:454)
at org.apache.atlas.AtlasBaseClient.initializeState(AtlasBaseClient.java:449)
at org.apache.atlas.AtlasBaseClient.<init>(AtlasBaseClient.java:132)
at org.apache.atlas.AtlasClientV2.<init>(AtlasClientV2.java:94)
at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.main(HiveMetaStoreBridge.java:134)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.security.authentication.client.ConnectionConfigurator
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 6 more
Failed to import Hive Meta Data!!!

Hyperledger Fabric configtxgen - Error reading config: map merge requires map or sequence of maps as the value

I'm trying to set up a simple Fabric network with the following:
Orderer Organization [abccoinOrderers]
Sample Organization [ABC]
After generating all the necessary files using the cryptogen tool, running the configtxgen command gives the following error:
student@abc:~/Desktop/fabric/network$ configtxgen -profile DefaultBlockOrderingService -outputBlock ./config/genesis.block -configPath $PWD
2019-12-26 12:35:42.131 MST [common.tools.configtxgen] main -> WARN 001 Omitting the channel ID for configtxgen for output operations is deprecated. Explicitly passing the channel ID will be required in the future, defaulting to 'testchainid'.
2019-12-26 12:35:42.136 MST [common.tools.configtxgen] main -> INFO 002 Loading configuration
2019-12-26 12:35:42.137 MST [common.tools.configtxgen.localconfig] Load -> PANI 003 Error reading configuration: While parsing config: yaml: map merge requires map or sequence of maps as the value
2019-12-26 12:35:42.137 MST [common.tools.configtxgen] func1 -> PANI 004 Error reading configuration: While parsing config: yaml: map merge requires map or sequence of maps as the value
panic: Error reading configuration: While parsing config: yaml: map merge requires map or sequence of maps as the value [recovered]
panic: Error reading configuration: While parsing config: yaml: map merge requires map or sequence of maps as the value
Here is the configtx.yaml:
Organizations:
  - &abccoinOrderers
    Name: abccoinOrderersMSP
    ID: abccoinOrderersMSP
    MSPDir: crypto-config/ordererOrganizations/abccoin.com/msp
  - &ABC
    Name: ABCMSP
    ID: ABCMSP
    MSPDir: crypto-config/peerOrganizations/ABC.abccoin.com/msp
    AnchorPeers:
      - Host: Andy.ABC.abccoin.com
        Port: 7051
Application: &ApplicationDefaults
Orderer:
  - &DevModeOrdering
    OrdererType: solo
    Addresses:
      - Devorderer.abccoin.com:7050
    BatchTimeout: 2s
    BatchSize:
      MaxMessageCount: 1
Profiles:
  DefaultBlockOrderingService:
    Orderer:
      <<: *DevModeOrdering
      Organizations:
        - *abccoinOrderers
    Consortiums:
      SampleConsortium:
        Organizations:
          - *ABC
  abcMembersOnly:
    Consortium: SampleConsortium
    Application:
      <<: *ApplicationDefaults
      Organizations:
        - *ABC
I've already tried rearranging the code blocks as mentioned in this post. I've also tried placing the "<<" key in quotes as mentioned in the issue YML document "<<: value" cannot be parsed #245, but it didn't help.
There are two errors in the configtx.yaml.
Orderer: is a map (object) type, not an array or slice type. When you define entries with a leading -, YAML treats them as an array.
Orderer: &DevModeOrdering   # '-' removed
  OrdererType: solo
  Addresses:
    - Devorderer.abccoin.com:7050
  BatchTimeout: 2s
  BatchSize:
    MaxMessageCount: 1
Application: you must declare the Organizations: parameter. It can be empty, but if you don't declare it at all, the file will not parse. To check, try converting the YAML to JSON in any online converter.
Application: &ApplicationDefaults
  Organizations:
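To see the merge-key rule in isolation, here is a minimal standalone sketch (the keys are hypothetical, not from configtx.yaml). The alias after <<: must resolve to a map, which is why the anchor has to sit on the map itself rather than on a list entry:
defaults: &defaults   # anchor on a map, so merging works
  timeout: 2s
  retries: 3
job:
  <<: *defaults       # merges timeout and retries into job
  timeout: 5s         # local keys override merged ones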

Graylog error on Web Interface in .js file

I have a problem and I believe it is due to my installation with HTTPS. I came to this conclusion simply because when installing over HTTP this does not happen, i.e. the problem is almost certainly due to the lack of some specific configuration in my docker-compose or something like that.
Below are my docker-compose.yml file, the error screenshot, and the stack trace that the screen itself shows.
version: '3'
services:
  # MongoDB: https://hub.docker.com/_/mongo/
  mongo:
    image: mongo:3
    networks:
      - graylog
  # Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/6.x/docker.html
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.2
    volumes:
      - es_data:/usr/share/elasticsearch/data
    environment:
      - http.host=0.0.0.0
      - transport.host=localhost
      - network.host=0.0.0.0
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    deploy:
      resources:
        limits:
          memory: 1g
    networks:
      - graylog
  # Graylog: https://hub.docker.com/r/graylog/graylog/
  graylog:
    image: csilveir/graylog
    volumes:
      - /home/ubuntu/graylog:/home/ubuntu/graylog
      - /home/ubuntu/graylog/plugins/graylog-plugin-slack-notification-1.0.4.jar:/usr/share/graylog/plugin/graylog-plugin-slack-notification-1.0.4.jar
    environment:
      # (must be at least 16 characters)!
      - GRAYLOG_ROOT_TIMEZONE=America/Sao_Paulo
      - GRAYLOG_ROOT_EMAIL=dev@dragonvc.com.br
      - GRAYLOG_IS_MASTER=true
      # HTTPS
      - GRAYLOG_HTTP_ENABLE_TLS=true
      - GRAYLOG_HTTP_TLS_CERT_FILE=/home/ubuntu/graylog/graylog.crt
      - GRAYLOG_HTTP_TLS_KEY_FILE=/home/ubuntu/graylog/graylog.key
      - GRAYLOG_HTTP_PUBLISH_URI=https://graylog.dragonvc.com.br/
    networks:
      - graylog
    depends_on:
      - mongo
      - elasticsearch
    ports:
      #- "80:9000"
      - 80:443
      - 443:9000
      - 514:514
      - 514:514/udp
      - 1514:1514/udp
      - 5044:5044
      - 9000:9000
      - 9350:9350
      - 12200-12300:12200-12300
      - 12200-12300:12200-12300/udp
      - 12900:12900
networks:
  graylog:
    driver: bridge
# Volumes for persisting data, see https://docs.docker.com/engine/admin/volumes/volumes/
volumes:
  mongo_data:
    driver: local
  es_data:
    driver: local
  graylog_journal:
    driver: local
Cannot set property '__data__' of undefined
Stack Trace:
TypeError: Cannot set property '__data__' of undefined
at Array.ye.select (https://graylog.dragonvc.com.br/assets/vendor.91c91d4a31d54d96392a.js:83:227338)
at Array.Z.insert (https://graylog.dragonvc.com.br/assets/vendor.91c91d4a31d54d96392a.js:83:224227)
at Array.ye.insert (https://graylog.dragonvc.com.br/assets/vendor.91c91d4a31d54d96392a.js:83:227450)
at SVGGElement.<anonymous> (https://graylog.dragonvc.com.br/assets/vendor.91c91d4a31d54d96392a.js:83:350536)
at https://graylog.dragonvc.com.br/assets/vendor.91c91d4a31d54d96392a.js:83:226023
at me (https://graylog.dragonvc.com.br/assets/vendor.91c91d4a31d54d96392a.js:83:222388)
at Array.Z.each (https://graylog.dragonvc.com.br/assets/vendor.91c91d4a31d54d96392a.js:83:225997)
at Array.l (https://graylog.dragonvc.com.br/assets/vendor.91c91d4a31d54d96392a.js:83:350305)
at Array.Z.call (https://graylog.dragonvc.com.br/assets/vendor.91c91d4a31d54d96392a.js:83:226096)
at r._drawAxis (https://graylog.dragonvc.com.br/assets/vendors~LoggedInPage.af2f821c666e2573f8ad.js:46:42162)
at render (https://graylog.dragonvc.com.br/assets/vendors~LoggedInPage.af2f821c666e2573f8ad.js:46:41677)
at https://graylog.dragonvc.com.br/assets/vendors~LoggedInPage.af2f821c666e2573f8ad.js:46:40953
at https://graylog.dragonvc.com.br/assets/vendors~LoggedInPage.af2f821c666e2573f8ad.js:46:23606
at Array.forEach (<anonymous>)
at e.Graph.render (https://graylog.dragonvc.com.br/assets/vendors~LoggedInPage.af2f821c666e2573f8ad.js:46:23586)
at Object.drawResultGraph (https://graylog.dragonvc.com.br/assets/LoggedInPage.af2f821c666e2573f8ad.js:1:203535)
at t._renderHistogram (https://graylog.dragonvc.com.br/assets/LoggedInPage.af2f821c666e2573f8ad.js:1:218539)
at t.componentDidMount (https://graylog.dragonvc.com.br/assets/LoggedInPage.af2f821c666e2573f8ad.js:1:217792)
at t.componentDidMount (https://graylog.dragonvc.com.br/assets/vendor.91c91d4a31d54d96392a.js:73:88989)
at Ro (https://graylog.dragonvc.com.br/assets/vendor.91c91d4a31d54d96392a.js:18:82395)
at Xo (https://graylog.dragonvc.com.br/assets/vendor.91c91d4a31d54d96392a.js:18:85070)
at https://graylog.dragonvc.com.br/assets/vendor.91c91d4a31d54d96392a.js:18:98277
at Object.exports.unstable_runWithPriority (https://graylog.dragonvc.com.br/assets/vendor.91c91d4a31d54d96392a.js:35:3284)
at Os (https://graylog.dragonvc.com.br/assets/vendor.91c91d4a31d54d96392a.js:18:98212)
at Ys (https://graylog.dragonvc.com.br/assets/vendor.91c91d4a31d54d96392a.js:18:97988)
at Ss (https://graylog.dragonvc.com.br/assets/vendor.91c91d4a31d54d96392a.js:18:97333)
at Ls (https://graylog.dragonvc.com.br/assets/vendor.91c91d4a31d54d96392a.js:18:96354)
at Zo (https://graylog.dragonvc.com.br/assets/vendor.91c91d4a31d54d96392a.js:18:95228)
at Object.enqueueSetState (https://graylog.dragonvc.com.br/assets/vendor.91c91d4a31d54d96392a.js:18:44755)
at t.b.setState (https://graylog.dragonvc.com.br/assets/vendor.91c91d4a31d54d96392a.js:26:1665)
at https://graylog.dragonvc.com.br/assets/90afab18-75.af2f821c666e2573f8ad.js:1:2875
at l (https://graylog.dragonvc.com.br/assets/builtins.af2f821c666e2573f8ad.js:104:88608)
at O._settlePromiseFromHandler (https://graylog.dragonvc.com.br/assets/builtins.af2f821c666e2573f8ad.js:104:61890)
at O._settlePromise (https://graylog.dragonvc.com.br/assets/builtins.af2f821c666e2573f8ad.js:104:62690)
at O._settlePromise0 (https://graylog.dragonvc.com.br/assets/builtins.af2f821c666e2573f8ad.js:104:63389)
at O._settlePromises (https://graylog.dragonvc.com.br/assets/builtins.af2f821c666e2573f8ad.js:104:64716)
at https://graylog.dragonvc.com.br/assets/builtins.af2f821c666e2573f8ad.js:104:18338
Component Stack:
in LegacyHistogram
in div
in t
in div
in t
in t
in t
in SearchPage
in Unknown
in n
in div
in t
in div
in t
in div
in AppWithSearchBar
in div
in t
in t
in withRouter(t)
in div
in App
in RouterContext
in Router
in h
in t
in n
in AppFacade
This error occurs in a .js file, as shown in the screenshot.
I had this problem and resolved it by clearing the browser cache.

Apache Storm Flux Simple KafkaSpout --> KafkaBolt NullPointerException

I'm using Apache Storm 0.10.0-beta1 and have started converting some topologies to Flux. I decided to start with a simple topology that reads from one Kafka queue and writes to another. I get the error below and am having a difficult time figuring out what is wrong. The topology YAML file follows the error.
Parsing file: /Users/frank/src/mapper/mapper.yaml
388 [main] INFO o.a.s.f.p.FluxParser - loading YAML from input stream...
391 [main] INFO o.a.s.f.p.FluxParser - Not performing property substitution.
391 [main] INFO o.a.s.f.p.FluxParser - Not performing environment variable substitution.
466 [main] INFO o.a.s.f.FluxBuilder - Detected DSL topology...
Exception in thread "main" java.lang.NullPointerException
at org.apache.storm.flux.FluxBuilder.canInvokeWithArgs(FluxBuilder.java:561)
at org.apache.storm.flux.FluxBuilder.findCompatibleConstructor(FluxBuilder.java:392)
at org.apache.storm.flux.FluxBuilder.buildObject(FluxBuilder.java:288)
at org.apache.storm.flux.FluxBuilder.buildSpout(FluxBuilder.java:361)
at org.apache.storm.flux.FluxBuilder.buildSpouts(FluxBuilder.java:349)
at org.apache.storm.flux.FluxBuilder.buildTopology(FluxBuilder.java:84)
at org.apache.storm.flux.Flux.runCli(Flux.java:153)
at org.apache.storm.flux.Flux.main(Flux.java:98)
Topology yaml:
name: "mapper-topology"
config:
  topology.workers: 1
  topology.debug: true
  kafka.broker.properties.metadata.broker.list: "localhost:9092"
  kafka.broker.properties.request.required.acks: "1"
  kafka.broker.properties.serializer.class: "kafka.serializer.StringEncoder"
# component definitions
components:
  - id: "topicSelector"
    className: "storm.kafka.bolt.selector.DefaultTopicSelector"
    constructorArgs:
      - "schemaq"
  - id: "kafkaMapper"
    className: "storm.kafka.bolt.mapper.FieldNameBasedTupleToKafkaMapper"
# spout definitions
spouts:
  - id: "kafka-spout"
    className: "storm.kafka.SpoutConfig"
    parallelism: 1
    constructorArgs:
      - ref: "zkHosts"
      - "mapperq"
      - "/mapperq"
      - "id-mapperq"
    properties:
      - name: "forceFromStart"
        value: true
      - name: "scheme"
        ref: "stringMultiScheme"
# bolt definitions
bolts:
  - id: "kafka-bolt"
    className: "storm.kafka.bolt.KafkaBolt"
    parallelism: 1
    configMethods:
      - name: "withTopicSelector"
        args: [ref: "topicSelector"]
      - name: "withTupleToKafkaMapper"
        args: [ref: "kafkaMapper"]
# streams
streams:
  - name: "kafka-spout --> kafka-bolt"
    from: "kafka-spout"
    to: "kafka-bolt"
    grouping:
      type: SHUFFLE
And here is the command:
storm jar /Users/frank/src/mapper/target/mapper-0.1.0-SNAPSHOT-standalone.jar org.apache.storm.flux.Flux --local mapper.yaml
The spout className should be storm.kafka.KafkaSpout, not storm.kafka.SpoutConfig. You should define SpoutConfig in the "components" section and have the spout reference it.
You can refer to https://github.com/apache/storm/blob/master/external/flux/flux-examples/src/main/resources/kafka_spout.yaml to see how to set up a KafkaSpout from Flux.
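As a rough illustration, here is a sketch of that layout using the names from the question. The zkHosts, stringScheme, and stringMultiScheme components were referenced but never shown in the original file, so the definitions below are assumptions modeled on the linked kafka_spout.yaml example:
components:
  - id: "stringScheme"
    className: "storm.kafka.StringScheme"            # assumed scheme class
  - id: "stringMultiScheme"
    className: "backtype.storm.spout.SchemeAsMultiScheme"
    constructorArgs:
      - ref: "stringScheme"
  - id: "zkHosts"
    className: "storm.kafka.ZkHosts"
    constructorArgs:
      - "localhost:2181"                             # assumed ZooKeeper address
  # SpoutConfig is now a component, not the spout itself
  - id: "spoutConfig"
    className: "storm.kafka.SpoutConfig"
    constructorArgs:
      - ref: "zkHosts"
      - "mapperq"
      - "/mapperq"
      - "id-mapperq"
    properties:
      - name: "forceFromStart"
        value: true
      - name: "scheme"
        ref: "stringMultiScheme"
spouts:
  - id: "kafka-spout"
    className: "storm.kafka.KafkaSpout"
    parallelism: 1
    constructorArgs:
      - ref: "spoutConfig"                           # KafkaSpout wraps the config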

Flume: kafka channel and hdfs sink get unable to deliver event error

I want to try the new Flafka flow: use only a Kafka channel to transfer data to an HDFS sink. I first tried a Kafka channel with a logger sink, which is easier to monitor. My configuration file is:
# Name the components on this agent
a1.sinks = sink1
a1.channels = channel1
a1.channels.channel1.type = org.apache.flume.channel.kafka.KafkaChannel
a1.channels.channel1.brokerList = localhost:9093,localhost:9094
a1.channels.channel1.topic = par4
a1.channels.channel1.zookeeperConnect = localhost:2181
a1.channels.channel1.parseAsFlumeEvent = false
a1.channels.cnannel1.kafka.consumer.timeout.ms = 1000000
a1.sinks.sink1.channel = channel1
a1.sinks.sink1.type = logger
I set up ZooKeeper and two brokers locally using the above port numbers, and I have a producer client that keeps pushing messages to Kafka.
I got the following messages:
2015-07-02 20:22:37,619 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider.start(PollingPropertiesFileConfigurationProvider.java:61)] Configuration provider starting
2015-07-02 20:22:37,623 (conf-file-poller-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:133)] Reloading configuration file:conf/example.conf
2015-07-02 20:22:37,629 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1017)] Processing:sink1
2015-07-02 20:22:37,629 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1017)] Processing:sink1
2015-07-02 20:22:37,629 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:931)] Added sinks: sink1 Agent: a1
2015-07-02 20:22:37,633 (conf-file-poller-0) [WARN - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.validateSources(FlumeConfiguration.java:508)] Agent configuration for 'a1' has no sources.
2015-07-02 20:22:37,635 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration.validateConfiguration(FlumeConfiguration.java:141)] Post-validation flume configuration contains configuration for agents: [a1]
2015-07-02 20:22:37,635 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.loadChannels(AbstractConfigurationProvider.java:145)] Creating channels
2015-07-02 20:22:37,639 (conf-file-poller-0) [INFO - org.apache.flume.channel.DefaultChannelFactory.create(DefaultChannelFactory.java:42)] Creating instance of channel channel1 type org.apache.flume.channel.kafka.KafkaChannel
2015-07-02 20:22:37,650 (conf-file-poller-0) [INFO - org.apache.flume.channel.kafka.KafkaChannel.configure(KafkaChannel.java:168)] Group ID was not specified. Using flume as the group id.
2015-07-02 20:22:37,658 (conf-file-poller-0) [INFO - org.apache.flume.channel.kafka.KafkaChannel.configure(KafkaChannel.java:188)] {metadata.broker.list=localhost:9093,localhost:9094, request.required.acks=-1, group.id=flume, zookeeper.connect=localhost:2181, consumer.timeout.ms=100, auto.commit.enable=false}
2015-07-02 20:22:37,665 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.loadChannels(AbstractConfigurationProvider.java:200)] Created channel channel1
2015-07-02 20:22:37,666 (conf-file-poller-0) [INFO - org.apache.flume.sink.DefaultSinkFactory.create(DefaultSinkFactory.java:42)] Creating instance of sink: sink1, type: logger
2015-07-02 20:22:37,669 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:114)] Channel channel1 connected to [sink1]
2015-07-02 20:22:37,674 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:138)] Starting new configuration:{ sourceRunners:{} sinkRunners:{sink1=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@3362ba9e counterGroup:{ name:null counters:{} } }} channels:{channel1=org.apache.flume.channel.kafka.KafkaChannel{name: channel1}} }
2015-07-02 20:22:37,675 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:145)] Starting Channel channel1
2015-07-02 20:22:37,677 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.channel.kafka.KafkaChannel.start(KafkaChannel.java:96)] Starting Kafka Channel: channel1
2015-07-02 20:22:37,885 (lifecycleSupervisor-1-0) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Verifying properties
2015-07-02 20:22:37,903 (lifecycleSupervisor-1-0) [WARN - kafka.utils.Logging$class.warn(Logging.scala:83)] Property auto.commit.enable is not valid
2015-07-02 20:22:37,903 (lifecycleSupervisor-1-0) [WARN - kafka.utils.Logging$class.warn(Logging.scala:83)] Property consumer.timeout.ms is not valid
2015-07-02 20:22:37,903 (lifecycleSupervisor-1-0) [WARN - kafka.utils.Logging$class.warn(Logging.scala:83)] Property group.id is not valid
2015-07-02 20:22:37,904 (lifecycleSupervisor-1-0) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Property metadata.broker.list is overridden to localhost:9093,localhost:9094
2015-07-02 20:22:37,904 (lifecycleSupervisor-1-0) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Property request.required.acks is overridden to -1
2015-07-02 20:22:37,904 (lifecycleSupervisor-1-0) [WARN - kafka.utils.Logging$class.warn(Logging.scala:83)] Property zookeeper.connect is not valid
2015-07-02 20:22:37,929 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.channel.kafka.KafkaChannel.start(KafkaChannel.java:99)] Topic = par4
2015-07-02 20:22:37,929 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:120)] Monitored counter group for type: CHANNEL, name: channel1: Successfully registered new MBean.
2015-07-02 20:22:37,930 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:96)] Component type: CHANNEL, name: channel1 started
2015-07-02 20:22:37,930 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:173)] Starting Sink sink1
2015-07-02 20:22:37,939 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Verifying properties
2015-07-02 20:22:37,939 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Property auto.commit.enable is overridden to false
2015-07-02 20:22:37,939 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Property consumer.timeout.ms is overridden to 100
2015-07-02 20:22:37,939 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Property group.id is overridden to flume
2015-07-02 20:22:37,939 (SinkRunner-PollingRunner-DefaultSinkProcessor) [WARN - kafka.utils.Logging$class.warn(Logging.scala:83)] Property metadata.broker.list is not valid
2015-07-02 20:22:37,940 (SinkRunner-PollingRunner-DefaultSinkProcessor) [WARN - kafka.utils.Logging$class.warn(Logging.scala:83)] Property request.required.acks is not valid
2015-07-02 20:22:37,942 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Property zookeeper.connect is overridden to localhost:2181
2015-07-02 20:22:37,951 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] [flume_MACC02PHH5LG3QC-1435893757951-c4c69fb7], Connecting to zookeeper instance at localhost:2181
2015-07-02 20:22:37,952 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:160)] Unable to deliver event. Exception follows.
java.lang.IllegalStateException: close() called when transaction is OPEN - you must either commit or rollback first
at com.google.common.base.Preconditions.checkState(Preconditions.java:172)
at org.apache.flume.channel.BasicTransactionSemantics.close(BasicTransactionSemantics.java:179)
at org.apache.flume.sink.LoggerSink.process(LoggerSink.java:105)
at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
at java.lang.Thread.run(Thread.java:745)
^C2015-07-02 20:22:39,497 (agent-shutdown-hook) [INFO - org.apache.flume.lifecycle.LifecycleSupervisor.stop(LifecycleSupervisor.java:79)] Stopping lifecycle supervisor 12
2015-07-02 20:22:39,499 (agent-shutdown-hook) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Shutting down producer
2015-07-02 20:22:39,499 (agent-shutdown-hook) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Closing all sync producers
2015-07-02 20:22:39,501 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:150)] Component type: CHANNEL, name: channel1 stopped
2015-07-02 20:22:39,501 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:156)] Shutdown Metric for type: CHANNEL, name: channel1. channel.start.time == 1435893757930
2015-07-02 20:22:39,501 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:162)] Shutdown Metric for type: CHANNEL, name: channel1. channel.stop.time == 1435893759501
2015-07-02 20:22:39,501 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:178)] Shutdown Metric for type: CHANNEL, name: channel1. channel.capacity == 0
2015-07-02 20:22:39,502 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:178)] Shutdown Metric for type: CHANNEL, name: channel1. channel.current.size == 0
2015-07-02 20:22:39,502 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:178)] Shutdown Metric for type: CHANNEL, name: channel1. channel.event.put.attempt == 0
2015-07-02 20:22:39,504 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:178)] Shutdown Metric for type: CHANNEL, name: channel1. channel.event.put.success == 0
2015-07-02 20:22:39,504 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:178)] Shutdown Metric for type: CHANNEL, name: channel1. channel.event.take.attempt == 0
2015-07-02 20:22:39,504 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:178)] Shutdown Metric for type: CHANNEL, name: channel1. channel.event.take.success == 0
2015-07-02 20:22:39,504 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:178)] Shutdown Metric for type: CHANNEL, name: channel1. channel.kafka.commit.time == 0
2015-07-02 20:22:39,504 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:178)] Shutdown Metric for type: CHANNEL, name: channel1. channel.kafka.event.get.time == 0
2015-07-02 20:22:39,504 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:178)] Shutdown Metric for type: CHANNEL, name: channel1. channel.kafka.event.send.time == 0
2015-07-02 20:22:39,504 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:178)] Shutdown Metric for type: CHANNEL, name: channel1. channel.rollback.count == 0
2015-07-02 20:22:39,505 (agent-shutdown-hook) [INFO - org.apache.flume.channel.kafka.KafkaChannel.stop(KafkaChannel.java:123)] Kafka channel channel1 stopped. Metrics: CHANNEL:channel1{channel.event.put.attempt=0, channel.event.put.success=0, channel.kafka.event.get.time=0, channel.current.size=0, channel.event.take.attempt=0, channel.event.take.success=0, channel.kafka.event.send.time=0, channel.capacity=0, channel.kafka.commit.time=0, channel.rollback.count=0}
2015-07-02 20:22:39,505 (agent-shutdown-hook) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider.stop(PollingPropertiesFileConfigurationProvider.java:83)] Configuration provider stopping
I don't understand why I get this unable-to-deliver error. (I also tried to set up an HDFS sink, which gives the same error.)
I also don't understand why I didn't successfully set consumer.timeout.ms.
Looking for help, thanks!
Based on the answer from the community, this question can be solved by following these two JIRA issues:
https://issues.apache.org/jira/browse/FLUME-2734
https://issues.apache.org/jira/browse/FLUME-2735
