How to solve MQJE001: Completion Code '2', Reason '2085' - ibm-mq

I am writing to an MQ queue from Java and I am intermittently getting the error response below. I am using IBM MQ version 9.
What could be the cause of this? The error is intermittent, and the queue and queue manager being written to exist and were running at the time.
[INFO ] 2020-06-13 22:48:03.752+0300 [main] [e5643f16-94ea-436f-ad71-54bee1c91381] MQWriteFile - Finished establishing a connection to DB
[INFO ] 2020-06-13 22:48:03.752+0300 [main] [e5643f16-94ea-436f-ad71-54bee1c91381] MQWriteFile - init
[INFO ] 2020-06-13 22:48:03.758+0300 [main] [e5643f16-94ea-436f-ad71-54bee1c91381] MQWriteFile - 5. Before calling write.selectQMgr()
[INFO ] 2020-06-13 22:48:03.864+0300 [main] [e5643f16-94ea-436f-ad71-54bee1c91381] MQWriteFile - 6. After selecting Queue Manager name
[DEBUG] 2020-06-13 22:48:03.876+0300 [main] [e5643f16-94ea-436f-ad71-54bee1c91381] MQWriteFile - ReasonCode:2085
[DEBUG] 2020-06-13 22:48:03.877+0300 [main] [e5643f16-94ea-436f-ad71-54bee1c91381] MQWriteFile - Completion Code:2
[ERROR] 2020-06-13 22:48:03.877+0300 [main] [e5643f16-94ea-436f-ad71-54bee1c91381] MQWriteFile - Message:MQJE001: Completion Code '2', Reason '2085'.
com.ibm.mq.MQException: MQJE001: Completion Code '2', Reason '2085'
at com.ibm.mq.MQDestination.open(MQDestination.java:322) ~[com.ibm.mq.jar:9.0.0.5 - p900-005-180821]
at com.ibm.mq.MQQueue.<init>(MQQueue.java:236) ~[com.ibm.mq.jar:9.0.0.5 - p900-005-180821]
at com.ibm.mq.MQQueueManager.accessQueue(MQQueueManager.java:3288) ~[com.ibm.mq.jar:9.0.0.5 - p900-005-180821]
at custom.MQWriteFile.write(MQWriteFile.java:364) ~[PGPEncryptedSOAPWMQWriter.jar:?]
at custom.MQWriteFile.<init>(MQWriteFile.java:221) [PGPEncryptedSOAPWMQWriter.jar:?]
at custom.PGPEncryptedSOAPWMQWriter.main(PGPEncryptedSOAPWMQWriter.java:69) [PGPEncryptedSOAPWMQWriter.jar:?]
[INFO ] 2020-06-13 22:48:03.879+0300 [main] [e5643f16-94ea-436f-ad71-54bee1c91381] MQWriteFile - LogStatusInDB
[DEBUG] 2020-06-13 22:48:03.911+0300 [main] [e5643f16-94ea-436f-ad71-54bee1c91381] MQWriteFile - Reason Code Desc:MQRC_UNKNOWN_OBJECT_NAME
[DEBUG] 2020-06-13 22:48:03.911+0300 [main] [e5643f16-94ea-436f-ad71-54bee1c91381] MQWriteFile - Completion Code Desc:MQCC_FAILED
[DEBUG] 2020-06-13 22:48:03.911+0300 [main] [e5643f16-94ea-436f-ad71-54bee1c91381] MQWriteFile - Returning with:3

Reason code 2085 (MQRC_UNKNOWN_OBJECT_NAME) means the queue manager could not find an object with the name your application supplied, so an intermittent failure usually points to the name itself varying between calls. Most likely the cause is logic-flow related: variables or objects fall out of scope, then come back into scope with reset or default values.
The traces that you are running will tell you which values your code is actually using. You will most likely need to add logging to your application to determine why the values are being lost.
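As a starting point, here is a minimal sketch (the queue manager and queue names are hypothetical placeholders) that logs the exact name handed to accessQueue(), so each intermittent 2085 can be tied to the value that was actually used:
import com.ibm.mq.MQException;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class QueueNameProbe {
    public static void main(String[] args) throws MQException {
        String qmgrName = "QM1";          // placeholder: your queue manager name
        String queueName = "DEV.QUEUE.1"; // placeholder: the name your code resolves at runtime
        MQQueueManager qmgr = new MQQueueManager(qmgrName);
        // Log the value actually passed; a 2085 means this exact name
        // was not found on the queue manager at open time.
        System.out.printf("Opening queue '%s' on '%s'%n", queueName, qmgrName);
        try {
            MQQueue queue = qmgr.accessQueue(queueName, CMQC.MQOO_OUTPUT);
            queue.close();
        } catch (MQException e) {
            System.out.printf("Reason %d while opening '%s'%n", e.reasonCode, queueName);
            throw e;
        } finally {
            qmgr.disconnect();
        }
    }
}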

Related

Unable to execute import-hive.sh

I am getting the error below while running import-hive.sh. Could you please help me out with this?
hadoop@0.0.0.0:~/apache-atlas-2.1.0/hook/apache-atlas-hive-hook-2.1.0/hook-bin$ ./import-hive.sh
Using Hive configuration directory [/home/hadoop/hive/conf]
Log file for import is /home/hadoop/apache-atlas-2.1.0/hook/apache-atlas-hive-hook-2.1.0/logs/import-hive.log
2021-07-13T15:43:21,449 INFO [main] org.apache.atlas.ApplicationProperties - Looking for atlas-application.properties in classpath
2021-07-13T15:43:21,452 INFO [main] org.apache.atlas.ApplicationProperties - Loading atlas-application.properties from file:/home/hadoop/hive/conf/atlas-application.properties
2021-07-13T15:43:21,505 INFO [main] org.apache.atlas.ApplicationProperties - Using graphdb backend 'janus'
2021-07-13T15:43:21,505 INFO [main] org.apache.atlas.ApplicationProperties - Using storage backend 'hbase2'
2021-07-13T15:43:21,506 INFO [main] org.apache.atlas.ApplicationProperties - Using index backend 'solr'
2021-07-13T15:43:21,506 INFO [main] org.apache.atlas.ApplicationProperties - Atlas is running in MODE: PROD.
2021-07-13T15:43:21,506 INFO [main] org.apache.atlas.ApplicationProperties - Setting solr-wait-searcher property 'true'
2021-07-13T15:43:21,506 INFO [main] org.apache.atlas.ApplicationProperties - Setting index.search.map-name property 'false'
2021-07-13T15:43:21,506 INFO [main] org.apache.atlas.ApplicationProperties - Setting atlas.graph.index.search.max-result-set-size = 150
2021-07-13T15:43:21,506 INFO [main] org.apache.atlas.ApplicationProperties - Property (set to default) atlas.graph.cache.db-cache = true
2021-07-13T15:43:21,506 INFO [main] org.apache.atlas.ApplicationProperties - Property (set to default) atlas.graph.cache.db-cache-clean-wait = 20
2021-07-13T15:43:21,506 INFO [main] org.apache.atlas.ApplicationProperties - Property (set to default) atlas.graph.cache.db-cache-size = 0.5
2021-07-13T15:43:21,506 INFO [main] org.apache.atlas.ApplicationProperties - Property (set to default) atlas.graph.cache.tx-cache-size = 15000
2021-07-13T15:43:21,506 INFO [main] org.apache.atlas.ApplicationProperties - Property (set to default) atlas.graph.cache.tx-dirty-size = 120
Enter username for atlas :- admin
Enter password for atlas :-
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/security/authentication/client/ConnectionConfigurator
at org.apache.atlas.AtlasBaseClient.getClient(AtlasBaseClient.java:287)
at org.apache.atlas.AtlasBaseClient.initializeState(AtlasBaseClient.java:454)
at org.apache.atlas.AtlasBaseClient.initializeState(AtlasBaseClient.java:449)
at org.apache.atlas.AtlasBaseClient.<init>(AtlasBaseClient.java:132)
at org.apache.atlas.AtlasClientV2.<init>(AtlasClientV2.java:94)
at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.main(HiveMetaStoreBridge.java:134)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.security.authentication.client.ConnectionConfigurator
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 6 more
Failed to import Hive Meta Data!!!
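The trace shows the JVM cannot load org.apache.hadoop.security.authentication.client.ConnectionConfigurator, a class that normally ships in Hadoop's hadoop-auth jar. A minimal probe (only the class name is taken from the trace; everything else is illustrative) to check whether that jar is visible on the classpath the script uses:
public class ClasspathProbe {
    public static void main(String[] args) {
        String cls = "org.apache.hadoop.security.authentication.client.ConnectionConfigurator";
        try {
            // Succeeds only if the hadoop-auth jar is on the classpath.
            Class.forName(cls);
            System.out.println("Found " + cls);
        } catch (ClassNotFoundException e) {
            System.out.println("Missing " + cls + " -- add the hadoop-auth jar to the classpath");
        }
    }
}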

Why are so many connections used by Spring Reactive with Mongo

I got a 'MongoWaitQueueFullException' and realized how many connections my application is using. I use the default configuration of Spring Boot (2.2.7.RELEASE) with reactive MongoDB (4.2.8). Transactions are used.
Even when running an integration test that creates a bit more than 200 elements and then groups them (200 groups), 10 connections are used. When this algorithm is executed over a real data set, the exception is thrown because the default limit of the waiting queue (500) is reached. This does not make the application scalable.
My question is: is there a way to design a reactive application that helps to reduce the number of connections?
This is the output of my test. Basically, it scans all translations of bundle files and then groups them per translation key. An element is persisted per translation key.
return Flux
        .fromIterable(bundleFile.getFiles())
        .map(ScannedBundleFileEntry::getLocale)
        .flatMap(locale ->
                handler
                        .scanTranslations(bundleFileEntity.toLocation(), locale, context)
                        .index()
                        .map(indexedTranslation ->
                                createTranslation(
                                        workspaceEntity,
                                        bundleFileEntity,
                                        locale.getId(),
                                        indexedTranslation.getT1(),            // index
                                        indexedTranslation.getT2().getKey(),   // bundle key
                                        indexedTranslation.getT2().getValue()  // translation
                                )
                        )
                        .flatMap(bundleKeyTemporaryRepository::save)
        )
        .thenMany(groupIntoBundleKeys(bundleFileEntity))
        .then(bundleKeyTemporaryRepository.deleteByBundleFile(bundleFileEntity.getId()))
        .then(Mono.just(bundleFileEntity));
The grouping function:
private Flux<BundleKeyEntity> groupIntoBundleKeys(BundleFileEntity bundleFile) {
    return this
            .findBundleKeys(bundleFile)
            .groupBy(BundleKeyGroupKey::new)
            .flatMap(bundleKeyGroup ->
                    bundleKeyGroup
                            .collectList()
                            .map(bundleKeys -> {
                                final BundleKeyGroupKey key = bundleKeyGroup.key();
                                final BundleKeyEntity entity = new BundleKeyEntity(key.getWorkspace(), key.getBundleFile(), key.getKey());
                                bundleKeys.forEach(entity::mergeInto);
                                return entity;
                            })
            )
            .flatMap(bundleKeyEntityRepository::save);
}
The test output:
560 [main] INFO o.s.b.t.c.SpringBootTestContextBootstrapper - Neither @ContextConfiguration nor @ContextHierarchy found for test class [be.sgerard.i18n.controller.TranslationControllerTest], using SpringBootContextLoader
569 [main] INFO o.s.t.c.s.AbstractContextLoader - Could not detect default resource locations for test class [be.sgerard.i18n.controller.TranslationControllerTest]: no resource found for suffixes {-context.xml, Context.groovy}.
870 [main] INFO o.s.b.t.c.SpringBootTestContextBootstrapper - Loaded default TestExecutionListener class names from location [META-INF/spring.factories]: [org.springframework.boot.test.mock.mockito.MockitoTestExecutionListener, org.springframework.boot.test.mock.mockito.ResetMocksTestExecutionListener, org.springframework.boot.test.autoconfigure.restdocs.RestDocsTestExecutionListener, org.springframework.boot.test.autoconfigure.web.client.MockRestServiceServerResetTestExecutionListener, org.springframework.boot.test.autoconfigure.web.servlet.MockMvcPrintOnlyOnFailureTestExecutionListener, org.springframework.boot.test.autoconfigure.web.servlet.WebDriverTestExecutionListener, org.springframework.test.context.web.ServletTestExecutionListener, org.springframework.test.context.support.DirtiesContextBeforeModesTestExecutionListener, org.springframework.test.context.support.DependencyInjectionTestExecutionListener, org.springframework.test.context.support.DirtiesContextTestExecutionListener, org.springframework.test.context.transaction.TransactionalTestExecutionListener, org.springframework.test.context.jdbc.SqlScriptsTestExecutionListener, org.springframework.test.context.event.EventPublishingTestExecutionListener, org.springframework.security.test.context.support.WithSecurityContextTestExecutionListener, org.springframework.security.test.context.support.ReactorContextTestExecutionListener]
897 [main] INFO o.s.b.t.c.SpringBootTestContextBootstrapper - Using TestExecutionListeners: [org.springframework.test.context.support.DirtiesContextBeforeModesTestExecutionListener@4372b9b6, org.springframework.boot.test.mock.mockito.MockitoTestExecutionListener@232a7d73, org.springframework.boot.test.autoconfigure.SpringBootDependencyInjectionTestExecutionListener@4b41e4dd, org.springframework.test.context.support.DirtiesContextTestExecutionListener@22ffa91a, org.springframework.test.context.transaction.TransactionalTestExecutionListener@74960bfa, org.springframework.test.context.jdbc.SqlScriptsTestExecutionListener@42721fe, org.springframework.test.context.event.EventPublishingTestExecutionListener@40844aab, org.springframework.security.test.context.support.WithSecurityContextTestExecutionListener@1f6c9cd8, org.springframework.security.test.context.support.ReactorContextTestExecutionListener@5b619d14, org.springframework.boot.test.mock.mockito.ResetMocksTestExecutionListener@66746f57, org.springframework.boot.test.autoconfigure.restdocs.RestDocsTestExecutionListener@447a020, org.springframework.boot.test.autoconfigure.web.client.MockRestServiceServerResetTestExecutionListener@7f36662c, org.springframework.boot.test.autoconfigure.web.servlet.MockMvcPrintOnlyOnFailureTestExecutionListener@28e8dde3, org.springframework.boot.test.autoconfigure.web.servlet.WebDriverTestExecutionListener@6d23017e]
1551 [background-preinit] INFO o.h.v.i.x.c.ValidationBootstrapParameters - HV000006: Using org.hibernate.validator.HibernateValidator as validation provider.
1677 [main] INFO b.s.i.c.TranslationControllerTest - Starting TranslationControllerTest on sgerard with PID 538 (started by sgerard in /home/sgerard/sandboxes/github-oauth/server)
1678 [main] INFO b.s.i.c.TranslationControllerTest - The following profiles are active: test
3250 [main] INFO o.s.d.r.c.RepositoryConfigurationDelegate - Bootstrapping Spring Data Reactive MongoDB repositories in DEFAULT mode.
3747 [main] INFO o.s.d.r.c.RepositoryConfigurationDelegate - Finished Spring Data repository scanning in 493ms. Found 9 Reactive MongoDB repository interfaces.
5143 [main] INFO o.s.c.s.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.security.config.annotation.method.configuration.ReactiveMethodSecurityConfiguration' of type [org.springframework.security.config.annotation.method.configuration.ReactiveMethodSecurityConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
5719 [main] INFO org.mongodb.driver.cluster - Cluster created with settings {hosts=[localhost:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500}
5996 [cluster-ClusterId{value='5f42490f1c60f43aff9d7d46', description='null'}-localhost:27017] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:1, serverValue:4337}] to localhost:27017
6010 [cluster-ClusterId{value='5f42490f1c60f43aff9d7d46', description='null'}-localhost:27017] INFO org.mongodb.driver.cluster - Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=REPLICA_SET_PRIMARY, state=CONNECTED, ok=true, version=ServerVersion{versionList=[4, 2, 8]}, minWireVersion=0, maxWireVersion=8, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=12207332, setName='rs0', canonicalAddress=4802c4aff450:27017, hosts=[4802c4aff450:27017], passives=[], arbiters=[], primary='4802c4aff450:27017', tagSet=TagSet{[]}, electionId=7fffffff0000000000000013, setVersion=1, lastWriteDate=Sun Aug 23 12:46:30 CEST 2020, lastUpdateTimeNanos=384505436362981}
6019 [main] INFO org.mongodb.driver.cluster - Cluster created with settings {hosts=[localhost:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500}
6040 [cluster-ClusterId{value='5f42490f1c60f43aff9d7d47', description='null'}-localhost:27017] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:2, serverValue:4338}] to localhost:27017
6042 [cluster-ClusterId{value='5f42490f1c60f43aff9d7d47', description='null'}-localhost:27017] INFO org.mongodb.driver.cluster - Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=REPLICA_SET_PRIMARY, state=CONNECTED, ok=true, version=ServerVersion{versionList=[4, 2, 8]}, minWireVersion=0, maxWireVersion=8, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=1727974, setName='rs0', canonicalAddress=4802c4aff450:27017, hosts=[4802c4aff450:27017], passives=[], arbiters=[], primary='4802c4aff450:27017', tagSet=TagSet{[]}, electionId=7fffffff0000000000000013, setVersion=1, lastWriteDate=Sun Aug 23 12:46:30 CEST 2020, lastUpdateTimeNanos=384505468960066}
7102 [nioEventLoopGroup-2-2] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:3, serverValue:4339}] to localhost:27017
11078 [main] INFO o.s.b.a.e.web.EndpointLinksResolver - Exposing 1 endpoint(s) beneath base path ''
11158 [main] INFO o.h.v.i.x.c.ValidationBootstrapParameters - HV000006: Using org.hibernate.validator.HibernateValidator as validation provider.
11720 [main] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:4, serverValue:4340}] to localhost:27017
12084 [main] INFO o.s.s.c.ThreadPoolTaskScheduler - Initializing ExecutorService 'taskScheduler'
12161 [main] INFO b.s.i.c.TranslationControllerTest - Started TranslationControllerTest in 11.157 seconds (JVM running for 13.532)
20381 [nioEventLoopGroup-2-3] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:5, serverValue:4341}] to localhost:27017
20408 [nioEventLoopGroup-2-2] INFO b.s.i.s.w.WorkspaceManagerImpl - Synchronize, there is no workspace for the branch [master], let's create it.
20416 [nioEventLoopGroup-2-3] INFO b.s.i.s.w.WorkspaceManagerImpl - The workspace [master] alias [e3cea374-0d37-4c57-bdbf-8bd14d279c12] has been created.
20421 [nioEventLoopGroup-2-3] INFO b.s.i.s.w.WorkspaceManagerImpl - Initializing workspace [master] alias [e3cea374-0d37-4c57-bdbf-8bd14d279c12].
20525 [nioEventLoopGroup-2-2] INFO b.s.i.s.i18n.TranslationManagerImpl - A bundle file has been found located in [server/src/main/resources/i18n] named [exception] with 2 file(s).
20812 [nioEventLoopGroup-2-4] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:6, serverValue:4342}] to localhost:27017
21167 [nioEventLoopGroup-2-8] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:10, serverValue:4345}] to localhost:27017
21167 [nioEventLoopGroup-2-6] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:8, serverValue:4344}] to localhost:27017
21393 [nioEventLoopGroup-2-5] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:7, serverValue:4343}] to localhost:27017
21398 [nioEventLoopGroup-2-7] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:9, serverValue:4346}] to localhost:27017
21442 [nioEventLoopGroup-2-2] INFO b.s.i.s.i18n.TranslationManagerImpl - A bundle file has been found located in [server/src/main/resources/i18n] named [validation] with 2 file(s).
21503 [nioEventLoopGroup-2-2] INFO b.s.i.s.i18n.TranslationManagerImpl - A bundle file has been found located in [server/src/test/resources/be/sgerard/i18n/service/i18n/file] named [file] with 2 file(s).
21621 [nioEventLoopGroup-2-2] INFO b.s.i.s.i18n.TranslationManagerImpl - A bundle file has been found located in [front/src/main/web/src/assets/i18n] named [i18n] with 2 file(s).
22745 [SpringContextShutdownHook] INFO o.s.s.c.ThreadPoolTaskScheduler - Shutting down ExecutorService 'taskScheduler'
22763 [SpringContextShutdownHook] INFO org.mongodb.driver.connection - Closed connection [connectionId{localValue:4, serverValue:4340}] to localhost:27017 because the pool has been closed.
22766 [SpringContextShutdownHook] INFO org.mongodb.driver.connection - Closed connection [connectionId{localValue:9, serverValue:4346}] to localhost:27017 because the pool has been closed.
22767 [SpringContextShutdownHook] INFO org.mongodb.driver.connection - Closed connection [connectionId{localValue:6, serverValue:4342}] to localhost:27017 because the pool has been closed.
22768 [SpringContextShutdownHook] INFO org.mongodb.driver.connection - Closed connection [connectionId{localValue:8, serverValue:4344}] to localhost:27017 because the pool has been closed.
22768 [SpringContextShutdownHook] INFO org.mongodb.driver.connection - Closed connection [connectionId{localValue:5, serverValue:4341}] to localhost:27017 because the pool has been closed.
22769 [SpringContextShutdownHook] INFO org.mongodb.driver.connection - Closed connection [connectionId{localValue:10, serverValue:4345}] to localhost:27017 because the pool has been closed.
22770 [SpringContextShutdownHook] INFO org.mongodb.driver.connection - Closed connection [connectionId{localValue:7, serverValue:4343}] to localhost:27017 because the pool has been closed.
22776 [SpringContextShutdownHook] INFO org.mongodb.driver.connection - Closed connection [connectionId{localValue:3, serverValue:4339}] to localhost:27017 because the pool has been closed.
Process finished with exit code 0
Spring Reactive is asynchronous. Imagine you have 3 items in your dataset. It opens a connection for the save of the first item, but it won't wait for that save to finish and reuse the connection for the second one. Instead, it opens a second connection as soon as possible. You thus end up exhausting all the available connections in the pool.
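One way to tame that eagerness, sketched below against the question's own repository (the concurrency value is an assumption to tune), is Reactor's flatMap overload that takes a concurrency argument, which bounds how many inner saves are subscribed, and therefore how many connections are requested, at once:
private Mono<Void> saveAllBounded(List<BundleKeyEntity> entities) {
    // At most `concurrency` save() calls are in flight at any moment,
    // instead of one eager inner subscription per element.
    final int concurrency = 4; // assumption: tune to the driver's pool size
    return Flux
            .fromIterable(entities)
            .flatMap(bundleKeyEntityRepository::save, concurrency)
            .then();
}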

IntelliJ IDEA debug mode cannot be terminated

When I use IntelliJ IDEA's debug mode, I set a breakpoint and execution pauses there successfully. When I do not want to go on and push the stop button, the process should be terminated, but it carries on. For example:
logger.info("fixGsDataAccount start");
logger.info("before delete all cnt:{}",cnt);
logger.info("query duplicate data include self cnt:{}",cnt);
logger.info("delete duplicate data end!");
logger.info("after delete all cnt:{}",afterDeleteCnt);
logger.info("fixGsDataAccount end");
I set a breakpoint at the 3rd line:
logger.info("query duplicate data include self cnt:{}",cnt);
and when I push the stop button, the log should stop here, like below:
05-07 17:57:47.536 [main] [INFO ] fixGsDataAccount start -
c.wzt.web.datafix.FixGsDataAccount:33
05-07 17:57:47.540 [main] [INFO ] before delete all cnt:675 -
c.wzt.web.datafix.FixGsDataAccount:35
05-07 17:57:47.545 [main] [INFO ] query duplicate data include self cnt:1 -
c.wzt.web.datafix.FixGsDataAccount:37
But it shows the following:
05-07 17:57:47.536 [main] [INFO ] fixGsDataAccount start -
c.wzt.web.datafix.FixGsDataAccount:33
05-07 17:57:47.540 [main] [INFO ] before delete all cnt:675 -
c.wzt.web.datafix.FixGsDataAccount:35
05-07 17:57:47.545 [main] [INFO ] query duplicate data include self cnt:1 -
c.wzt.web.datafix.FixGsDataAccount:37
05-07 17:57:47.546 [main] [INFO ] fixGsDataAccount end -
c.wzt.web.datafix.FixGsDataAccount:53
The last logger call still prints.
Enable the Kill the debug process immediately option, under Settings | Build, Execution, Deployment | Debugger.

Flume: Kafka channel and HDFS sink get 'unable to deliver event' error

I want to try the new Flafka flow: use only a Kafka channel to transfer data to an HDFS sink. I started with a Kafka channel and a logger sink, which is easier to monitor. My configuration file is:
# Name the components on this agent
a1.sinks = sink1
a1.channels = channel1
a1.channels.channel1.type = org.apache.flume.channel.kafka.KafkaChannel
a1.channels.channel1.brokerList = localhost:9093,localhost:9094
a1.channels.channel1.topic = par4
a1.channels.channel1.zookeeperConnect = localhost:2181
a1.channels.channel1.parseAsFlumeEvent = false
a1.channels.cnannel1.kafka.consumer.timeout.ms = 1000000
a1.sinks.sink1.channel = channel1
a1.sinks.sink1.type = logger
I set up ZooKeeper and two brokers locally on the ports above, and I have a producer client continuously pushing messages to Kafka.
I got the following messages:
2015-07-02 20:22:37,619 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider.start(PollingPropertiesFileConfigurationProvider.java:61)] Configuration provider starting
2015-07-02 20:22:37,623 (conf-file-poller-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:133)] Reloading configuration file:conf/example.conf
2015-07-02 20:22:37,629 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1017)] Processing:sink1
2015-07-02 20:22:37,629 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1017)] Processing:sink1
2015-07-02 20:22:37,629 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:931)] Added sinks: sink1 Agent: a1
2015-07-02 20:22:37,633 (conf-file-poller-0) [WARN - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.validateSources(FlumeConfiguration.java:508)] Agent configuration for 'a1' has no sources.
2015-07-02 20:22:37,635 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration.validateConfiguration(FlumeConfiguration.java:141)] Post-validation flume configuration contains configuration for agents: [a1]
2015-07-02 20:22:37,635 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.loadChannels(AbstractConfigurationProvider.java:145)] Creating channels
2015-07-02 20:22:37,639 (conf-file-poller-0) [INFO - org.apache.flume.channel.DefaultChannelFactory.create(DefaultChannelFactory.java:42)] Creating instance of channel channel1 type org.apache.flume.channel.kafka.KafkaChannel
2015-07-02 20:22:37,650 (conf-file-poller-0) [INFO - org.apache.flume.channel.kafka.KafkaChannel.configure(KafkaChannel.java:168)] Group ID was not specified. Using flume as the group id.
2015-07-02 20:22:37,658 (conf-file-poller-0) [INFO - org.apache.flume.channel.kafka.KafkaChannel.configure(KafkaChannel.java:188)] {metadata.broker.list=localhost:9093,localhost:9094, request.required.acks=-1, group.id=flume, zookeeper.connect=localhost:2181, consumer.timeout.ms=100, auto.commit.enable=false}
2015-07-02 20:22:37,665 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.loadChannels(AbstractConfigurationProvider.java:200)] Created channel channel1
2015-07-02 20:22:37,666 (conf-file-poller-0) [INFO - org.apache.flume.sink.DefaultSinkFactory.create(DefaultSinkFactory.java:42)] Creating instance of sink: sink1, type: logger
2015-07-02 20:22:37,669 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:114)] Channel channel1 connected to [sink1]
2015-07-02 20:22:37,674 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:138)] Starting new configuration:{ sourceRunners:{} sinkRunners:{sink1=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@3362ba9e counterGroup:{ name:null counters:{} } }} channels:{channel1=org.apache.flume.channel.kafka.KafkaChannel{name: channel1}} }
2015-07-02 20:22:37,675 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:145)] Starting Channel channel1
2015-07-02 20:22:37,677 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.channel.kafka.KafkaChannel.start(KafkaChannel.java:96)] Starting Kafka Channel: channel1
2015-07-02 20:22:37,885 (lifecycleSupervisor-1-0) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Verifying properties
2015-07-02 20:22:37,903 (lifecycleSupervisor-1-0) [WARN - kafka.utils.Logging$class.warn(Logging.scala:83)] Property auto.commit.enable is not valid
2015-07-02 20:22:37,903 (lifecycleSupervisor-1-0) [WARN - kafka.utils.Logging$class.warn(Logging.scala:83)] Property consumer.timeout.ms is not valid
2015-07-02 20:22:37,903 (lifecycleSupervisor-1-0) [WARN - kafka.utils.Logging$class.warn(Logging.scala:83)] Property group.id is not valid
2015-07-02 20:22:37,904 (lifecycleSupervisor-1-0) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Property metadata.broker.list is overridden to localhost:9093,localhost:9094
2015-07-02 20:22:37,904 (lifecycleSupervisor-1-0) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Property request.required.acks is overridden to -1
2015-07-02 20:22:37,904 (lifecycleSupervisor-1-0) [WARN - kafka.utils.Logging$class.warn(Logging.scala:83)] Property zookeeper.connect is not valid
2015-07-02 20:22:37,929 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.channel.kafka.KafkaChannel.start(KafkaChannel.java:99)] Topic = par4
2015-07-02 20:22:37,929 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:120)] Monitored counter group for type: CHANNEL, name: channel1: Successfully registered new MBean.
2015-07-02 20:22:37,930 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:96)] Component type: CHANNEL, name: channel1 started
2015-07-02 20:22:37,930 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:173)] Starting Sink sink1
2015-07-02 20:22:37,939 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Verifying properties
2015-07-02 20:22:37,939 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Property auto.commit.enable is overridden to false
2015-07-02 20:22:37,939 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Property consumer.timeout.ms is overridden to 100
2015-07-02 20:22:37,939 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Property group.id is overridden to flume
2015-07-02 20:22:37,939 (SinkRunner-PollingRunner-DefaultSinkProcessor) [WARN - kafka.utils.Logging$class.warn(Logging.scala:83)] Property metadata.broker.list is not valid
2015-07-02 20:22:37,940 (SinkRunner-PollingRunner-DefaultSinkProcessor) [WARN - kafka.utils.Logging$class.warn(Logging.scala:83)] Property request.required.acks is not valid
2015-07-02 20:22:37,942 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Property zookeeper.connect is overridden to localhost:2181
2015-07-02 20:22:37,951 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] [flume_MACC02PHH5LG3QC-1435893757951-c4c69fb7], Connecting to zookeeper instance at localhost:2181
2015-07-02 20:22:37,952 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:160)] Unable to deliver event. Exception follows.
java.lang.IllegalStateException: close() called when transaction is OPEN - you must either commit or rollback first
at com.google.common.base.Preconditions.checkState(Preconditions.java:172)
at org.apache.flume.channel.BasicTransactionSemantics.close(BasicTransactionSemantics.java:179)
at org.apache.flume.sink.LoggerSink.process(LoggerSink.java:105)
at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
at java.lang.Thread.run(Thread.java:745)
^C2015-07-02 20:22:39,497 (agent-shutdown-hook) [INFO - org.apache.flume.lifecycle.LifecycleSupervisor.stop(LifecycleSupervisor.java:79)] Stopping lifecycle supervisor 12
2015-07-02 20:22:39,499 (agent-shutdown-hook) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Shutting down producer
2015-07-02 20:22:39,499 (agent-shutdown-hook) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Closing all sync producers
2015-07-02 20:22:39,501 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:150)] Component type: CHANNEL, name: channel1 stopped
2015-07-02 20:22:39,501 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:156)] Shutdown Metric for type: CHANNEL, name: channel1. channel.start.time == 1435893757930
2015-07-02 20:22:39,501 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:162)] Shutdown Metric for type: CHANNEL, name: channel1. channel.stop.time == 1435893759501
2015-07-02 20:22:39,501 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:178)] Shutdown Metric for type: CHANNEL, name: channel1. channel.capacity == 0
2015-07-02 20:22:39,502 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:178)] Shutdown Metric for type: CHANNEL, name: channel1. channel.current.size == 0
2015-07-02 20:22:39,502 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:178)] Shutdown Metric for type: CHANNEL, name: channel1. channel.event.put.attempt == 0
2015-07-02 20:22:39,504 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:178)] Shutdown Metric for type: CHANNEL, name: channel1. channel.event.put.success == 0
2015-07-02 20:22:39,504 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:178)] Shutdown Metric for type: CHANNEL, name: channel1. channel.event.take.attempt == 0
2015-07-02 20:22:39,504 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:178)] Shutdown Metric for type: CHANNEL, name: channel1. channel.event.take.success == 0
2015-07-02 20:22:39,504 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:178)] Shutdown Metric for type: CHANNEL, name: channel1. channel.kafka.commit.time == 0
2015-07-02 20:22:39,504 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:178)] Shutdown Metric for type: CHANNEL, name: channel1. channel.kafka.event.get.time == 0
2015-07-02 20:22:39,504 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:178)] Shutdown Metric for type: CHANNEL, name: channel1. channel.kafka.event.send.time == 0
2015-07-02 20:22:39,504 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:178)] Shutdown Metric for type: CHANNEL, name: channel1. channel.rollback.count == 0
2015-07-02 20:22:39,505 (agent-shutdown-hook) [INFO - org.apache.flume.channel.kafka.KafkaChannel.stop(KafkaChannel.java:123)] Kafka channel channel1 stopped. Metrics: CHANNEL:channel1{channel.event.put.attempt=0, channel.event.put.success=0, channel.kafka.event.get.time=0, channel.current.size=0, channel.event.take.attempt=0, channel.event.take.success=0, channel.kafka.event.send.time=0, channel.capacity=0, channel.kafka.commit.time=0, channel.rollback.count=0}
2015-07-02 20:22:39,505 (agent-shutdown-hook) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider.stop(PollingPropertiesFileConfigurationProvider.java:83)] Configuration provider stopping
I don't understand why I get this 'unable to deliver event' error. (I also tried setting up an HDFS sink, which gives me the same error.)
I also don't understand why setting consumer.timeout.ms did not take effect.
Looking for help, thanks!
Based on the answer from the community, this question can be solved by following these two JIRA tickets:
https://issues.apache.org/jira/browse/FLUME-2734
https://issues.apache.org/jira/browse/FLUME-2735
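Independently of those tickets, note that the posted configuration misspells the channel name on the consumer-timeout line ('cnannel1' instead of 'channel1'), which would explain why consumer.timeout.ms stayed at 100 in the startup log. The corrected line would be:
a1.channels.channel1.kafka.consumer.timeout.ms = 1000000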

FTP file upload with Camel

I'm trying to upload a file via FTP using Camel. My code is the following:
public static void main(String... args) throws Exception {
    CamelContext context = new DefaultCamelContext();
    context.addRoutes(new RouteBuilder() {
        @Override
        public void configure() {
            from("file:src/data?noop=true").to("ftp://myftp.com/ftp-test/?username=drauxxx&password=mypassword");
        }
    });
    context.start();
    Thread.sleep(10000);
    context.stop();
}
It connects fine (as you can see in the log), but when it tries to store the file I get this error:
[1) thread #1 - file://src/data] FileConsumer DEBUG Took 0.003 seconds to poll: src/data
[1) thread #1 - file://src/data] FileConsumer DEBUG Total 1 files to consume
[1) thread #1 - file://src/data] FileConsumer DEBUG About to process file: GenericFile[ReadMe.txt] using exchange: Exchange[ReadMe.txt]
[1) thread #1 - file://src/data] SendProcessor DEBUG >>>> Endpoint[ftp://drxxx.com/ftp-test/?password=******&username=drau9546] Exchange[ReadMe.txt]
[1) thread #1 - file://src/data] RemoteFileProducer DEBUG Not already connected/logged in. Connecting to: Endpoint[ftp://drxxx.com/ftp-test/?password=******&username=drau9546]
[1) thread #1 - file://src/data] RemoteFileProducer INFO Connected and logged in to: Endpoint[ftp://drxxx.com/ftp-test/?password=******&username=drau9546]
[1) thread #1 - file://src/data] GenericFileConverter DEBUG Read file src/data/ReadMe.txt (no charset)
[ main] DefaultCamelContext INFO Apache Camel 2.10.0 (CamelContext: camel-1) is shutting down
[ main] DefaultShutdownStrategy INFO Starting to graceful shutdown 1 routes (timeout 300 seconds)
[ main] DefaultExecutorServiceManager DEBUG Created new ThreadPool for source: org.apache.camel.impl.DefaultShutdownStrategy@77addb59 with name: ShutdownTask. -> org.apache.camel.util.concurrent.RejectableThreadPoolExecutor@bdccedd
[el-1) thread #2 - ShutdownTask] DefaultShutdownStrategy DEBUG There are 1 routes to shutdown
[el-1) thread #2 - ShutdownTask] DefaultShutdownStrategy DEBUG Route: route1 suspended and shutdown deferred, was consuming from: Endpoint[file://src/data?noop=true]
[el-1) thread #2 - ShutdownTask] DefaultShutdownStrategy INFO Waiting as there are still 2 inflight and pending exchanges to complete, timeout in 300 seconds.
[el-1) thread #2 - ShutdownTask] DefaultShutdownStrategy INFO Waiting as there are still 2 inflight and pending exchanges to complete, timeout in 299 seconds.
....
[1) thread #1 - file://src/data] RemoteFileProducer WARN Writing file failed with: Error writing file [ftp-test/ReadMe.txt]
[1) thread #1 - file://src/data] RemoteFileProducer DEBUG Disconnecting from: Endpoint[ftp://drxxx.com/ftp-test/?password=******&username=drau9546]
[1) thread #1 - file://src/data] DefaultErrorHandler DEBUG Failed delivery for (MessageId: ID-raccoonix-55914-1347631981229-0-1 on ExchangeId: ID-raccoonix-55914-1347631981229-0-2). On delivery attempt: 0 caught: org.apache.camel.component.file.GenericFileOperationFailedException: Error writing file [ftp-test/ReadMe.txt]
[1) thread #1 - file://src/data] GenericFileOnCompletion DEBUG Done processing file: GenericFile[ReadMe.txt] using exchange: Exchange[ReadMe.txt]
[1) thread #1 - file://src/data] GenericFileOnCompletion WARN Rollback file strategy: org.apache.camel.component.file.strategy.GenericFileRenameProcessStrategy@2c31f2a7 for file: GenericFile[ReadMe.txt]
[1) thread #1 - file://src/data] FileUtil DEBUG Retrying attempt 0 to delete file: /home/andrea/workspace/transfer/src/data/ReadMe.txt.camelLock
[1) thread #1 - file://src/data] FileUtil DEBUG Tried 1 to delete file: /home/andrea/workspace/transfer/src/data/ReadMe.txt.camelLock with result: true
[1) thread #1 - file://src/data] DefaultErrorHandler ERROR Failed delivery for (MessageId: ID-raccoonix-55914-1347631981229-0-1 on ExchangeId: ID-raccoonix-55914-1347631981229-0-2). Exhausted after delivery attempt: 1 caught: org.apache.camel.component.file.GenericFileOperationFailedException: Error writing file [ftp-test/ReadMe.txt]
Are there specific parameters needed for a correct file transfer?
Is it an FTP-server-related problem?
Try
from("file:src/data?noop=true").to("ftp://drauxxx@myftp.com/ftp-test/?password=mypassword");
Remove
context.stop();
as this is experimental code and you don't need it.
Also check the rights of the FTP user that you are connecting with.
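Putting the two suggestions together, a minimal revised main method might look like this (host and credentials are the question's own placeholders):
public static void main(String... args) throws Exception {
    CamelContext context = new DefaultCamelContext();
    context.addRoutes(new RouteBuilder() {
        @Override
        public void configure() {
            // Username moved into the URI authority, as suggested above.
            from("file:src/data?noop=true")
                .to("ftp://drauxxx@myftp.com/ftp-test/?password=mypassword");
        }
    });
    context.start();
    // Give the file consumer time to poll and upload before the JVM exits;
    // there is no explicit context.stop() for this experimental run.
    Thread.sleep(10000);
}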