How can I use Apache HTTP pooling connection manager with Spring WS? - spring-webclient

The connection seems to be pooled but not reused
Reuse of connections...
2022-11-24 21:08:50,934 [ReceiveDepth:1] DEBUG org.apache.http.impl.conn.PoolingHttpClientConnectionManager - Connection request: [route: {s}->https://services.mybank.nl:18290][total available: 4; route allocated: 4 of 4; total allocated: 4 of 4]
2022-11-24 21:08:50,934 [ReceiveDepth:1] DEBUG org.apache.http.impl.conn.DefaultManagedHttpClientConnection - http-outgoing-0: Close connection
2022-11-24 21:08:50,941 [ReceiveDepth:1] DEBUG org.apache.http.impl.conn.PoolingHttpClientConnectionManager - Connection leased: [id: 4][route: {s}->https://services.mybank.nl:18290][total available: 3; route allocated: 4 of 4; total allocated: 4 of 4]
2022-11-24 21:08:50,941 [ReceiveDepth:1] DEBUG org.apache.http.impl.execchain.MainClientExec - Opening connection {s}->https://services.mybank.nl:18290
2022-11-24 21:08:50,941 [ReceiveDepth:1] DEBUG org.apache.http.impl.conn.DefaultHttpClientConnectionOperator - Connecting to services.mybank.nl/10.239.166.11:18290
2022-11-24 21:08:50,969 [ReceiveDepth:1] DEBUG org.apache.http.impl.conn.DefaultHttpClientConnectionOperator - Connection established 10.239.148.30:33114<->10.239.166.11:18290
2022-11-24 21:08:50,969 [ReceiveDepth:1] DEBUG org.apache.http.impl.execchain.MainClientExec - Executing request POST /migrate HTTP/1.1
2022-11-24 21:08:50,969 [ReceiveDepth:1] DEBUG org.apache.http.impl.execchain.MainClientExec - Target auth state: UNCHALLENGED
2022-11-24 21:08:50,969 [ReceiveDepth:1] DEBUG org.apache.http.impl.execchain.MainClientExec - Proxy auth state: UNCHALLENGED
2022-11-24 21:08:50,983 [ReceiveDepth:1] DEBUG org.apache.http.impl.execchain.MainClientExec - Connection can be kept alive indefinitely
2022-11-24 21:08:50,984 [ReceiveDepth:1] DEBUG org.apache.http.impl.conn.PoolingHttpClientConnectionManager - Connection [id: 4][route: {s}->https://services.mybank.nl:18290][state: CN=rc-uat3.rf.mybank.nl, OU=BSRC Groep ICT, O=Cooperatieve Mybank U.A., L=Utrecht, ST=Utrecht, C=NL] can be kept alive indefinitely
2022-11-24 21:08:50,984 [ReceiveDepth:1] DEBUG org.apache.http.impl.conn.DefaultManagedHttpClientConnection - http-outgoing-4: set socket timeout to 0
2022-11-24 21:08:50,984 [ReceiveDepth:1] DEBUG org.apache.http.impl.conn.PoolingHttpClientConnectionManager - Connection released: [id: 4][route: {s}->https://services.mybank.nl:18290][state: CN=rc-uat3.rf.mybank.nl, OU=BSRC Groep ICT, O=Cooperatieve Mybank U.A., L=Utrecht, ST=Utrecht, C=NL][total available: 4; route allocated: 4 of 4; total allocated: 4 of 4]
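For context, here is a minimal sketch (not the asker's actual configuration) of how a PoolingHttpClientConnectionManager is typically wired into a Spring WS WebServiceTemplate through HttpComponentsMessageSender; the pool sizes are illustrative:

import org.apache.http.client.HttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.springframework.ws.client.core.WebServiceTemplate;
import org.springframework.ws.transport.http.HttpComponentsMessageSender;

public class PooledWebServiceTemplateSketch {

    public static WebServiceTemplate pooledTemplate() {
        // One shared pool: at most 20 connections in total and 20 per route (example values).
        PoolingHttpClientConnectionManager connectionManager = new PoolingHttpClientConnectionManager();
        connectionManager.setMaxTotal(20);
        connectionManager.setDefaultMaxPerRoute(20);

        // When a custom HttpClient is supplied, the RemoveSoapHeadersInterceptor has to be
        // registered manually; it strips the Content-Length/Transfer-Encoding headers that
        // SAAJ already set, which the default HttpComponentsMessageSender constructor
        // otherwise handles for you.
        HttpClient httpClient = HttpClients.custom()
                .setConnectionManager(connectionManager)
                .addInterceptorFirst(new HttpComponentsMessageSender.RemoveSoapHeadersInterceptor())
                .build();

        WebServiceTemplate template = new WebServiceTemplate();
        template.setMessageSender(new HttpComponentsMessageSender(httpClient));
        return template;
    }
}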

Related

How can I turn off netty client DNS connect retry?

I'm using the Netty HttpClient with Spring WebClient.
I connect to www.naver.com and set the host entries like this:
127.0.0.1 www.naver.com
125.209.222.142 www.naver.com
When I run httpClient.get(), the result looks like this:
DEBUG r.n.r.PooledConnectionProvider - [id:83c8069f] Created a new pooled channel, now: 0 active connections, 0 inactive connections and 0 pending acquire requests.
DEBUG r.n.t.SslProvider - [id:83c8069f] SSL enabled using engine sun.security.ssl.SSLEngineImpl#5412b204 and SNI www.naver.com:443
DEBUG i.n.b.AbstractByteBuf - -Dio.netty.buffer.checkAccessible: true
DEBUG i.n.b.AbstractByteBuf - -Dio.netty.buffer.checkBounds: true
DEBUG i.n.u.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector#7ec1e7a5
DEBUG r.n.t.TransportConfig - [id:83c8069f] Initialized pipeline DefaultChannelPipeline{(reactor.left.sslHandler = io.netty.handler.ssl.SslHandler), (reactor.left.loggingHandler = reactor.netty.transport.logging.ReactorNettyLoggingHandler), (reactor.left.sslReader = reactor.netty.tcp.SslProvider$SslReadHandler), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpClientCodec), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
INFO reactor - [id:83c8069f] REGISTERED
DEBUG r.n.t.TransportConnector - [id:83c8069f] Connecting to [www.naver.com/127.0.0.1:443].
INFO reactor - [id:83c8069f] CONNECT: www.naver.com/127.0.0.1:443
INFO reactor - [id:83c8069f] CLOSE
DEBUG r.n.t.TransportConnector - [id:83c8069f] Connect attempt to [www.naver.com/127.0.0.1:443] failed.
io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: no further information: www.naver.com/127.0.0.1:443
Caused by: java.net.ConnectException: Connection refused: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:707)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
DEBUG r.n.r.PooledConnectionProvider - [id:0e3c272c] Created a new pooled channel, now: 0 active connections, 0 inactive connections and 0 pending acquire requests.
DEBUG r.n.t.SslProvider - [id:0e3c272c] SSL enabled using engine sun.security.ssl.SSLEngineImpl#47b241b and SNI www.naver.com:443
DEBUG r.n.t.TransportConfig - [id:0e3c272c] Initialized pipeline DefaultChannelPipeline{(reactor.left.sslHandler = io.netty.handler.ssl.SslHandler), (reactor.left.loggingHandler = reactor.netty.transport.logging.ReactorNettyLoggingHandler), (reactor.left.sslReader = reactor.netty.tcp.SslProvider$SslReadHandler), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpClientCodec), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
INFO reactor - [id:0e3c272c] REGISTERED
INFO reactor - [id:83c8069f] UNREGISTERED
DEBUG r.n.t.TransportConnector - [id:0e3c272c] Connecting to [www.naver.com/125.209.222.142:443].
INFO reactor - [id:0e3c272c] CONNECT: www.naver.com/125.209.222.142:443
DEBUG r.n.r.DefaultPooledConnectionProvider - [id:0e3c272c, L:/192.168.55.77:52910 - R:www.naver.com/125.209.222.142:443] Registering pool release on close event for channel
DEBUG r.n.r.PooledConnectionProvider - [id:0e3c272c, L:/192.168.55.77:52910 - R:www.naver.com/125.209.222.142:443] Channel connected, now: 1 active connections, 0 inactive connections and 0 pending acquire requests.
DEBUG i.n.u.Recycler - -Dio.netty.recycler.maxCapacityPerThread: 4096
DEBUG i.n.u.Recycler - -Dio.netty.recycler.maxSharedCapacityFactor: 2
DEBUG i.n.u.Recycler - -Dio.netty.recycler.linkCapacity: 16
DEBUG i.n.u.Recycler - -Dio.netty.recycler.ratio: 8
DEBUG i.n.u.Recycler - -Dio.netty.recycler.delayedQueue.ratio: 8
INFO reactor - [id:0e3c272c, L:/192.168.55.77:52910 - R:www.naver.com/125.209.222.142:443] ACTIVE
As you can see, it tries to connect twice. I don't want it to connect twice or spend extra time connecting;
I just want it to fail if the connection cannot be made.
Can I avoid this situation?
This is being tracked in the following issue:
https://github.com/reactor/reactor-netty/issues/1822
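Until that issue is addressed, one way to at least bound the cost of each connect attempt is to set a connect timeout on the Reactor Netty HttpClient backing the WebClient. This is a minimal sketch, not a full fix (the second resolved address is still tried), and the 2-second value is only an example:

import io.netty.channel.ChannelOption;
import org.springframework.http.client.reactive.ReactorClientHttpConnector;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.netty.http.client.HttpClient;

public class WebClientConnectTimeoutSketch {

    public static WebClient buildWebClient() {
        // Cap how long a single connect attempt may take before it fails.
        HttpClient httpClient = HttpClient.create()
                .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 2000); // example value

        return WebClient.builder()
                .clientConnector(new ReactorClientHttpConnector(httpClient))
                .build();
    }
}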

Why are so many connections used by Spring Reactive with Mongo?

I got the exception 'MongoWaitQueueFullException', which made me realize how many connections my application is using. I use the default configuration of Spring Boot (2.2.7.RELEASE) with reactive MongoDB (4.2.8). Transactions are used.
Even when running an integration test that basically creates a bit more than 200 elements and then groups them (into 200 groups), 10 connections are used. When this algorithm is executed over a real data set, this exception is thrown because the default limit of the waiting queue (500) is reached. This does not make the application scalable.
My question is: is there a way to design a reactive application that helps reduce the number of connections?
This is my test. Basically, it scans all translations of bundle files and then groups them per translation key. An element is persisted per translation key.
return Flux
    .fromIterable(bundleFile.getFiles())
    .map(ScannedBundleFileEntry::getLocale)
    .flatMap(locale ->
        handler
            .scanTranslations(bundleFileEntity.toLocation(), locale, context)
            .index()
            .map(indexedTranslation ->
                createTranslation(
                    workspaceEntity,
                    bundleFileEntity,
                    locale.getId(),
                    indexedTranslation.getT1(),           // index
                    indexedTranslation.getT2().getKey(),  // bundle key
                    indexedTranslation.getT2().getValue() // translation
                )
            )
            .flatMap(bundleKeyTemporaryRepository::save)
    )
    .thenMany(groupIntoBundleKeys(bundleFileEntity))
    .then(bundleKeyTemporaryRepository.deleteByBundleFile(bundleFileEntity.getId()))
    .then(Mono.just(bundleFileEntity));
The grouping function:
private Flux<BundleKeyEntity> groupIntoBundleKeys(BundleFileEntity bundleFile) {
    return this
        .findBundleKeys(bundleFile)
        .groupBy(BundleKeyGroupKey::new)
        .flatMap(bundleKeyGroup ->
            bundleKeyGroup
                .collectList()
                .map(bundleKeys -> {
                    final BundleKeyGroupKey key = bundleKeyGroup.key();
                    final BundleKeyEntity entity = new BundleKeyEntity(key.getWorkspace(), key.getBundleFile(), key.getKey());
                    bundleKeys.forEach(entity::mergeInto);
                    return entity;
                })
        )
        .flatMap(bundleKeyEntityRepository::save);
}
The test output:
560 [main] INFO o.s.b.t.c.SpringBootTestContextBootstrapper - Neither #ContextConfiguration nor #ContextHierarchy found for test class [be.sgerard.i18n.controller.TranslationControllerTest], using SpringBootContextLoader
569 [main] INFO o.s.t.c.s.AbstractContextLoader - Could not detect default resource locations for test class [be.sgerard.i18n.controller.TranslationControllerTest]: no resource found for suffixes {-context.xml, Context.groovy}.
870 [main] INFO o.s.b.t.c.SpringBootTestContextBootstrapper - Loaded default TestExecutionListener class names from location [META-INF/spring.factories]: [org.springframework.boot.test.mock.mockito.MockitoTestExecutionListener, org.springframework.boot.test.mock.mockito.ResetMocksTestExecutionListener, org.springframework.boot.test.autoconfigure.restdocs.RestDocsTestExecutionListener, org.springframework.boot.test.autoconfigure.web.client.MockRestServiceServerResetTestExecutionListener, org.springframework.boot.test.autoconfigure.web.servlet.MockMvcPrintOnlyOnFailureTestExecutionListener, org.springframework.boot.test.autoconfigure.web.servlet.WebDriverTestExecutionListener, org.springframework.test.context.web.ServletTestExecutionListener, org.springframework.test.context.support.DirtiesContextBeforeModesTestExecutionListener, org.springframework.test.context.support.DependencyInjectionTestExecutionListener, org.springframework.test.context.support.DirtiesContextTestExecutionListener, org.springframework.test.context.transaction.TransactionalTestExecutionListener, org.springframework.test.context.jdbc.SqlScriptsTestExecutionListener, org.springframework.test.context.event.EventPublishingTestExecutionListener, org.springframework.security.test.context.support.WithSecurityContextTestExecutionListener, org.springframework.security.test.context.support.ReactorContextTestExecutionListener]
897 [main] INFO o.s.b.t.c.SpringBootTestContextBootstrapper - Using TestExecutionListeners: [org.springframework.test.context.support.DirtiesContextBeforeModesTestExecutionListener#4372b9b6, org.springframework.boot.test.mock.mockito.MockitoTestExecutionListener#232a7d73, org.springframework.boot.test.autoconfigure.SpringBootDependencyInjectionTestExecutionListener#4b41e4dd, org.springframework.test.context.support.DirtiesContextTestExecutionListener#22ffa91a, org.springframework.test.context.transaction.TransactionalTestExecutionListener#74960bfa, org.springframework.test.context.jdbc.SqlScriptsTestExecutionListener#42721fe, org.springframework.test.context.event.EventPublishingTestExecutionListener#40844aab, org.springframework.security.test.context.support.WithSecurityContextTestExecutionListener#1f6c9cd8, org.springframework.security.test.context.support.ReactorContextTestExecutionListener#5b619d14, org.springframework.boot.test.mock.mockito.ResetMocksTestExecutionListener#66746f57, org.springframework.boot.test.autoconfigure.restdocs.RestDocsTestExecutionListener#447a020, org.springframework.boot.test.autoconfigure.web.client.MockRestServiceServerResetTestExecutionListener#7f36662c, org.springframework.boot.test.autoconfigure.web.servlet.MockMvcPrintOnlyOnFailureTestExecutionListener#28e8dde3, org.springframework.boot.test.autoconfigure.web.servlet.WebDriverTestExecutionListener#6d23017e]
1551 [background-preinit] INFO o.h.v.i.x.c.ValidationBootstrapParameters - HV000006: Using org.hibernate.validator.HibernateValidator as validation provider.
1677 [main] INFO b.s.i.c.TranslationControllerTest - Starting TranslationControllerTest on sgerard with PID 538 (started by sgerard in /home/sgerard/sandboxes/github-oauth/server)
1678 [main] INFO b.s.i.c.TranslationControllerTest - The following profiles are active: test
3250 [main] INFO o.s.d.r.c.RepositoryConfigurationDelegate - Bootstrapping Spring Data Reactive MongoDB repositories in DEFAULT mode.
3747 [main] INFO o.s.d.r.c.RepositoryConfigurationDelegate - Finished Spring Data repository scanning in 493ms. Found 9 Reactive MongoDB repository interfaces.
5143 [main] INFO o.s.c.s.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.security.config.annotation.method.configuration.ReactiveMethodSecurityConfiguration' of type [org.springframework.security.config.annotation.method.configuration.ReactiveMethodSecurityConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
5719 [main] INFO org.mongodb.driver.cluster - Cluster created with settings {hosts=[localhost:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500}
5996 [cluster-ClusterId{value='5f42490f1c60f43aff9d7d46', description='null'}-localhost:27017] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:1, serverValue:4337}] to localhost:27017
6010 [cluster-ClusterId{value='5f42490f1c60f43aff9d7d46', description='null'}-localhost:27017] INFO org.mongodb.driver.cluster - Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=REPLICA_SET_PRIMARY, state=CONNECTED, ok=true, version=ServerVersion{versionList=[4, 2, 8]}, minWireVersion=0, maxWireVersion=8, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=12207332, setName='rs0', canonicalAddress=4802c4aff450:27017, hosts=[4802c4aff450:27017], passives=[], arbiters=[], primary='4802c4aff450:27017', tagSet=TagSet{[]}, electionId=7fffffff0000000000000013, setVersion=1, lastWriteDate=Sun Aug 23 12:46:30 CEST 2020, lastUpdateTimeNanos=384505436362981}
6019 [main] INFO org.mongodb.driver.cluster - Cluster created with settings {hosts=[localhost:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500}
6040 [cluster-ClusterId{value='5f42490f1c60f43aff9d7d47', description='null'}-localhost:27017] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:2, serverValue:4338}] to localhost:27017
6042 [cluster-ClusterId{value='5f42490f1c60f43aff9d7d47', description='null'}-localhost:27017] INFO org.mongodb.driver.cluster - Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=REPLICA_SET_PRIMARY, state=CONNECTED, ok=true, version=ServerVersion{versionList=[4, 2, 8]}, minWireVersion=0, maxWireVersion=8, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=1727974, setName='rs0', canonicalAddress=4802c4aff450:27017, hosts=[4802c4aff450:27017], passives=[], arbiters=[], primary='4802c4aff450:27017', tagSet=TagSet{[]}, electionId=7fffffff0000000000000013, setVersion=1, lastWriteDate=Sun Aug 23 12:46:30 CEST 2020, lastUpdateTimeNanos=384505468960066}
7102 [nioEventLoopGroup-2-2] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:3, serverValue:4339}] to localhost:27017
11078 [main] INFO o.s.b.a.e.web.EndpointLinksResolver - Exposing 1 endpoint(s) beneath base path ''
11158 [main] INFO o.h.v.i.x.c.ValidationBootstrapParameters - HV000006: Using org.hibernate.validator.HibernateValidator as validation provider.
11720 [main] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:4, serverValue:4340}] to localhost:27017
12084 [main] INFO o.s.s.c.ThreadPoolTaskScheduler - Initializing ExecutorService 'taskScheduler'
12161 [main] INFO b.s.i.c.TranslationControllerTest - Started TranslationControllerTest in 11.157 seconds (JVM running for 13.532)
20381 [nioEventLoopGroup-2-3] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:5, serverValue:4341}] to localhost:27017
20408 [nioEventLoopGroup-2-2] INFO b.s.i.s.w.WorkspaceManagerImpl - Synchronize, there is no workspace for the branch [master], let's create it.
20416 [nioEventLoopGroup-2-3] INFO b.s.i.s.w.WorkspaceManagerImpl - The workspace [master] alias [e3cea374-0d37-4c57-bdbf-8bd14d279c12] has been created.
20421 [nioEventLoopGroup-2-3] INFO b.s.i.s.w.WorkspaceManagerImpl - Initializing workspace [master] alias [e3cea374-0d37-4c57-bdbf-8bd14d279c12].
20525 [nioEventLoopGroup-2-2] INFO b.s.i.s.i18n.TranslationManagerImpl - A bundle file has been found located in [server/src/main/resources/i18n] named [exception] with 2 file(s).
20812 [nioEventLoopGroup-2-4] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:6, serverValue:4342}] to localhost:27017
21167 [nioEventLoopGroup-2-8] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:10, serverValue:4345}] to localhost:27017
21167 [nioEventLoopGroup-2-6] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:8, serverValue:4344}] to localhost:27017
21393 [nioEventLoopGroup-2-5] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:7, serverValue:4343}] to localhost:27017
21398 [nioEventLoopGroup-2-7] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:9, serverValue:4346}] to localhost:27017
21442 [nioEventLoopGroup-2-2] INFO b.s.i.s.i18n.TranslationManagerImpl - A bundle file has been found located in [server/src/main/resources/i18n] named [validation] with 2 file(s).
21503 [nioEventLoopGroup-2-2] INFO b.s.i.s.i18n.TranslationManagerImpl - A bundle file has been found located in [server/src/test/resources/be/sgerard/i18n/service/i18n/file] named [file] with 2 file(s).
21621 [nioEventLoopGroup-2-2] INFO b.s.i.s.i18n.TranslationManagerImpl - A bundle file has been found located in [front/src/main/web/src/assets/i18n] named [i18n] with 2 file(s).
22745 [SpringContextShutdownHook] INFO o.s.s.c.ThreadPoolTaskScheduler - Shutting down ExecutorService 'taskScheduler'
22763 [SpringContextShutdownHook] INFO org.mongodb.driver.connection - Closed connection [connectionId{localValue:4, serverValue:4340}] to localhost:27017 because the pool has been closed.
22766 [SpringContextShutdownHook] INFO org.mongodb.driver.connection - Closed connection [connectionId{localValue:9, serverValue:4346}] to localhost:27017 because the pool has been closed.
22767 [SpringContextShutdownHook] INFO org.mongodb.driver.connection - Closed connection [connectionId{localValue:6, serverValue:4342}] to localhost:27017 because the pool has been closed.
22768 [SpringContextShutdownHook] INFO org.mongodb.driver.connection - Closed connection [connectionId{localValue:8, serverValue:4344}] to localhost:27017 because the pool has been closed.
22768 [SpringContextShutdownHook] INFO org.mongodb.driver.connection - Closed connection [connectionId{localValue:5, serverValue:4341}] to localhost:27017 because the pool has been closed.
22769 [SpringContextShutdownHook] INFO org.mongodb.driver.connection - Closed connection [connectionId{localValue:10, serverValue:4345}] to localhost:27017 because the pool has been closed.
22770 [SpringContextShutdownHook] INFO org.mongodb.driver.connection - Closed connection [connectionId{localValue:7, serverValue:4343}] to localhost:27017 because the pool has been closed.
22776 [SpringContextShutdownHook] INFO org.mongodb.driver.connection - Closed connection [connectionId{localValue:3, serverValue:4339}] to localhost:27017 because the pool has been closed.
Process finished with exit code 0
Spring Reactive is asynchronous. Imagine you have 3 items in your dataset. It opens a connection to save the first item, but it does not wait for that save to finish before saving the second one; instead, it opens a second connection as soon as possible. You thus end up using every connection the pool can offer.
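A minimal sketch (not the project's actual code) of the usual mitigation: the flatMap overload that takes a concurrency argument caps how many inner saves are subscribed at once, and therefore how many connections can be requested at the same time. The ReactiveSaver interface below is a hypothetical stand-in for a reactive repository's save method:

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class BoundedSaveSketch {

    // Hypothetical stand-in for a reactive repository's save method.
    interface ReactiveSaver<T> {
        Mono<T> save(T entity);
    }

    // Persist items while keeping at most 4 saves in flight at any time, so the
    // pipeline cannot demand more Mongo connections than intended.
    static <T> Mono<Void> saveAll(Flux<T> items, ReactiveSaver<T> repository) {
        return items
                .flatMap(repository::save, 4) // at most 4 concurrent inner subscriptions
                .then();
    }
}

The same concurrency argument can be applied to the flatMap calls in the snippets above to keep the whole pipeline within the pool size.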

Problem in Flink UI on Mesos cluster with two slave nodes

I have four physical nodes with Docker installed on each of them. I configured Mesos, Flink, Zookeeper, Hadoop and Marathon in the Docker containers of each one. Previously I had three nodes, one slave and two masters, ran Flink via Marathon, and its UI came up without any problems. After that, I changed the cluster to two masters and two slaves. I added this JSON file in Marathon and it ran, but the Flink UI is not shown on either slave node. The error is shown below.
{
"id": "flink",
"cmd": "/home/flink-1.7.2/bin/mesos-appmaster.sh -Djobmanager.heap.mb=1024 -Djobmanager.rpc.port=6123 -Drest.port=8081 -Dmesos.resourcemanager.tasks.mem=1024 -Dtaskmanager.heap.mb=1024 -Dtaskmanager.numberOfTaskSlots=2 -Dparallelism.default=2 -Dmesos.resourcemanager.tasks.cpus=1",
"cpus": 1.0,
"mem": 1024,
"instances": 2
}
Error:
Service temporarily unavailable due to an ongoing leader election. Please refresh
I cleared the Zookeeper contents with these commands:
/home/zookeeper-3.4.14/bin/zkCleanup.sh /var/lib/zookeeper/data/ -n 10
rm -rf /var/lib/zookeeper/data/version-2
rm /var/lib/zookeeper/data/zookeeper_server.pid
Also, I ran this command and deleted the Flink contents in Zookeeper:
/home/zookeeper-3.4.14/bin/zkCli.sh
delete /flink/default/leader/....
But one of the Flink UIs still has the problem.
I have configured Flink high availability like this:
high-availability: zookeeper
high-availability.storageDir: hdfs:///flink/ha/
high-availability.zookeeper.quorum: 0.0.0.0:2181,10.32.0.3:2181,10.32.0.4:2181,10.32.0.5:2181
fs.hdfs.hadoopconf: /opt/hadoop/etc/hadoop
fs.hdfs.hdfssite: /opt/hadoop/etc/hadoop/hdfs-site.xml
recovery.zookeeper.path.mesos-workers: /mesos-workers
env.java.home: /opt/java
mesos.master: 10.32.0.2:5050,10.32.0.3:5050
Because I use a Mesos cluster, I did not change anything else in flink-conf.yaml.
This is the part of the slave log that contains the error:
- Remote connection to [null] failed with java.net.ConnectException: Connection refused: localhost/127.0.0.1:37797
2019-07-03 07:22:42,922 WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink#localhost:37797] has failed, address is now gated for [50] ms. Reason: [Association failed with [akka.tcp://flink#localhost:37797]] Caused by: [Connection refused: localhost/127.0.0.1:37797]
2019-07-03 07:22:43,003 WARN akka.remote.transport.netty.NettyTransport - Remote connection to [null] failed with java.net.ConnectException: Connection refused: localhost/127.0.0.1:37797
2019-07-03 07:22:43,004 WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink#localhost:37797] has failed, address is now gated for [50] ms. Reason: [Association failed with [akka.tcp://flink#localhost:37797]] Caused by: [Connection refused: localhost/127.0.0.1:37797]
2019-07-03 07:22:43,072 WARN akka.remote.transport.netty.NettyTransport - Remote connection to [null] failed with java.net.ConnectException: Connection refused: localhost/127.0.0.1:37797
2019-07-03 07:22:43,073 WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink#localhost:37797] has failed, address is now gated for [50] ms. Reason: [Association failed with [akka.tcp://flink#localhost:37797]] Caused by: [Connection refused: localhost/127.0.0.1:37797]
2019-07-03 07:23:45,891 WARN org.apache.flink.runtime.webmonitor.retriever.impl.RpcGatewayRetriever - Error while retrieving the leader gateway. Retrying to connect to akka.tcp://flink#localhost:37797/user/dispatcher.
This is the Zookeeper log for the node whose Flink UI has the error:
2019-07-03 09:43:33,425 [myid:] - INFO [main:QuorumPeerConfig#136] - Reading configuration from: /home/zookeeper-3.4.14/bin/../conf/zoo.cfg
2019-07-03 09:43:33,434 [myid:] - INFO [main:QuorumPeer$QuorumServer#185] - Resolved hostname: 0.0.0.0 to address: /0.0.0.0
2019-07-03 09:43:33,435 [myid:] - INFO [main:QuorumPeer$QuorumServer#185] - Resolved hostname: 10.32.0.3 to address: /10.32.0.3
2019-07-03 09:43:33,435 [myid:] - INFO [main:QuorumPeer$QuorumServer#185] - Resolved hostname: 10.32.0.2 to address: /10.32.0.2
2019-07-03 09:43:33,435 [myid:] - INFO [main:QuorumPeer$QuorumServer#185] - Resolved hostname: 10.32.0.5 to address: /10.32.0.5
2019-07-03 09:43:33,435 [myid:] - WARN [main:QuorumPeerConfig#354] - Non-optimial configuration, consider an odd number of servers.
2019-07-03 09:43:33,436 [myid:] - INFO [main:QuorumPeerConfig#398] - Defaulting to majority quorums
2019-07-03 09:43:33,438 [myid:3] - INFO [main:DatadirCleanupManager#78] - autopurge.snapRetainCount set to 3
2019-07-03 09:43:33,438 [myid:3] - INFO [main:DatadirCleanupManager#79] - autopurge.purgeInterval set to 0
2019-07-03 09:43:33,438 [myid:3] - INFO [main:DatadirCleanupManager#101] - Purge task is not scheduled.
2019-07-03 09:43:33,445 [myid:3] - INFO [main:QuorumPeerMain#130] - Starting quorum peer
2019-07-03 09:43:33,450 [myid:3] - INFO [main:ServerCnxnFactory#117] - Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory
2019-07-03 09:43:33,452 [myid:3] - INFO [main:NIOServerCnxnFactory#89] - binding to port 0.0.0.0/0.0.0.0:2181
2019-07-03 09:43:33,458 [myid:3] - INFO [main:QuorumPeer#1159] - tickTime set to 2000
2019-07-03 09:43:33,458 [myid:3] - INFO [main:QuorumPeer#1205] - initLimit set to 10
2019-07-03 09:43:33,458 [myid:3] - INFO [main:QuorumPeer#1179] - minSessionTimeout set to -1
2019-07-03 09:43:33,459 [myid:3] - INFO [main:QuorumPeer#1190] - maxSessionTimeout set to -1
2019-07-03 09:43:33,464 [myid:3] - INFO [main:QuorumPeer#1470] - QuorumPeer communication is not secured!
2019-07-03 09:43:33,464 [myid:3] - INFO [main:QuorumPeer#1499] - quorum.cnxn.threads.size set to 20
2019-07-03 09:43:33,465 [myid:3] - INFO [main:QuorumPeer#669] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2019-07-03 09:43:33,519 [myid:3] - INFO [main:QuorumPeer#684] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2019-07-03 09:43:33,566 [myid:3] - INFO [ListenerThread:QuorumCnxManager$Listener#736] - My election bind port: /0.0.0.0:3888
2019-07-03 09:43:33,574 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:QuorumPeer#910] - LOOKING
2019-07-03 09:43:33,575 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:FastLeaderElection#813] - New election. My id = 3, proposed zxid=0x0
2019-07-03 09:43:33,581 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection#595] - Notification: 1 (message format version), 1 (n.leader), 0x200000004 (n.zxid), 0x5 (n.round), LOOKING (n.state), 1 (n.sid), 0x2 (n.peerEpoch) LOOKING (my state)
2019-07-03 09:43:33,581 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection#595] - Notification: 1 (message format version), 1 (n.leader), 0x200000004 (n.zxid), 0x5 (n.round), LEADING (n.state), 1 (n.sid), 0x3 (n.peerEpoch) LOOKING (my state)
2019-07-03 09:43:33,581 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection#595] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
2019-07-03 09:43:33,582 [myid:3] - INFO [WorkerSender[myid=3]:QuorumCnxManager#347] - Have smaller server identifier, so dropping the connection: (4, 3)
2019-07-03 09:43:33,583 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection#595] - Notification: 1 (message format version), 1 (n.leader), 0x200000004 (n.zxid), 0x5 (n.round), LOOKING (n.state), 3 (n.sid), 0x2 (n.peerEpoch) LOOKING (my state)
2019-07-03 09:43:33,583 [myid:3] - INFO [WorkerSender[myid=3]:QuorumCnxManager#347] - Have smaller server identifier, so dropping the connection: (4, 3)
2019-07-03 09:43:33,583 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection#595] - Notification: 1 (message format version), 1 (n.leader), 0x200000004 (n.zxid), 0x5 (n.round), LEADING (n.state), 1 (n.sid), 0x3 (n.peerEpoch) LOOKING (my state)
2019-07-03 09:43:33,584 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection#595] - Notification: 1 (message format version), 1 (n.leader), 0x200000004 (n.zxid), 0x5 (n.round), LOOKING (n.state), 2 (n.sid), 0x2 (n.peerEpoch) LOOKING (my state)
2019-07-03 09:43:33,585 [myid:3] - INFO [/0.0.0.0:3888:QuorumCnxManager$Listener#743] - Received connection request /10.32.0.5:42182
2019-07-03 09:43:33,585 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection#595] - Notification: 1 (message format version), 1 (n.leader), 0x200000004 (n.zxid), 0x5 (n.round), FOLLOWING (n.state), 2 (n.sid), 0x3 (n.peerEpoch) LOOKING (my state)
2019-07-03 09:43:33,585 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection#595] - Notification: 1 (message format version), 1 (n.leader), 0x200000004 (n.zxid), 0x5 (n.round), FOLLOWING (n.state), 2 (n.sid), 0x3 (n.peerEpoch) LOOKING (my state)
2019-07-03 09:43:33,587 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection#595] - Notification: 1 (message format version), 1 (n.leader), 0x200000004 (n.zxid), 0x5 (n.round), LOOKING (n.state), 4 (n.sid), 0x2 (n.peerEpoch) LOOKING (my state)
2019-07-03 09:43:33,587 [myid:3] - WARN [RecvWorker:4:QuorumCnxManager$RecvWorker#1025] - Connection broken for id 4, my id = 3, error =
java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:1010)
2019-07-03 09:43:33,589 [myid:3] - WARN [RecvWorker:4:QuorumCnxManager$RecvWorker#1028] - Interrupting SendWorker
2019-07-03 09:43:33,588 [myid:3] - INFO [/0.0.0.0:3888:QuorumCnxManager$Listener#743] - Received connection request /10.32.0.5:42184
2019-07-03 09:43:33,589 [myid:3] - WARN [SendWorker:4:QuorumCnxManager$SendWorker#941] - Interrupted while waiting for message on queue
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088)
at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1094)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:74)
at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:929)
2019-07-03 09:43:33,589 [myid:3] - WARN [SendWorker:4:QuorumCnxManager$SendWorker#951] - Send worker leaving thread
2019-07-03 09:43:33,590 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection#595] - Notification: 1 (message format version), 1 (n.leader), 0x200000004 (n.zxid), 0x5 (n.round), FOLLOWING (n.state), 4 (n.sid), 0x3 (n.peerEpoch) LOOKING (my state)
2019-07-03 09:43:33,590 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:QuorumPeer#980] - FOLLOWING
2019-07-03 09:43:33,591 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection#595] - Notification: 1 (message format version), 1 (n.leader), 0x200000004 (n.zxid), 0x5 (n.round), FOLLOWING (n.state), 4 (n.sid), 0x3 (n.peerEpoch) FOLLOWING (my state)
2019-07-03 09:43:33,593 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Learner#86] - TCP NoDelay set to: true
2019-07-03 09:43:33,597 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment#100] - Server environment:zookeeper.version=3.4.14-4c25d480e66aadd371de8bd2fd8da255ac140bcf, built on 03/06/2019 16:18 GMT
2019-07-03 09:43:33,597 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment#100] - Server environment:host.name=629a802d822d
2019-07-03 09:43:33,597 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment#100] - Server environment:java.version=1.8.0_191
2019-07-03 09:43:33,597 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment#100] - Server environment:java.vendor=Oracle Corporation
2019-07-03 09:43:33,597 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment#100] - Server environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre
2019-07-03 09:43:33,598 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment#100] - Server environment:java.class.path=/home/zookeeper-3.4.14/bin/../zookeeper-server/target/classes:/home/zookeeper-3.4.14/bin/../build/classes:/home/zookeeper-3.4.14/bin/../zookeeper-server/target/lib/*.jar:/home/zookeeper-3.4.14/bin/../build/lib/*.jar:/home/zookeeper-3.4.14/bin/../lib/slf4j-log4j12-1.7.25.jar:/home/zookeeper-3.4.14/bin/../lib/slf4j-api-1.7.25.jar:/home/zookeeper-3.4.14/bin/../lib/netty-3.10.6.Final.jar:/home/zookeeper-3.4.14/bin/../lib/log4j-1.2.17.jar:/home/zookeeper-3.4.14/bin/../lib/jline-0.9.94.jar:/home/zookeeper-3.4.14/bin/../lib/audience-annotations-0.5.0.jar:/home/zookeeper-3.4.14/bin/../zookeeper-3.4.14.jar:/home/zookeeper-3.4.14/bin/../zookeeper-server/src/main/resources/lib/*.jar:/home/zookeeper-3.4.14/bin/../conf:
2019-07-03 09:43:33,598 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment#100] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
2019-07-03 09:43:33,598 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment#100] - Server environment:java.io.tmpdir=/tmp
2019-07-03 09:43:33,598 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment#100] - Server environment:java.compiler=<NA>
2019-07-03 09:43:33,598 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment#100] - Server environment:os.name=Linux
2019-07-03 09:43:33,598 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment#100] - Server environment:os.arch=amd64
2019-07-03 09:43:33,598 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment#100] - Server environment:os.version=4.18.0-21-generic
2019-07-03 09:43:33,598 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment#100] - Server environment:user.name=root
2019-07-03 09:43:33,598 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment#100] - Server environment:user.home=/root
2019-07-03 09:43:33,598 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment#100] - Server environment:user.dir=/
2019-07-03 09:43:33,599 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:ZooKeeperServer#174] - Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /var/lib/zookeeper/data/version-2 snapdir /var/lib/zookeeper/data/version-2
2019-07-03 09:43:33,600 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Follower#65] - FOLLOWING - LEADER ELECTION TOOK - 25
2019-07-03 09:43:33,601 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:QuorumPeer$QuorumServer#185] - Resolved hostname: 10.32.0.2 to address: /10.32.0.2
2019-07-03 09:43:33,637 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Learner#336] - Getting a snapshot from leader 0x300000000
2019-07-03 09:43:33,644 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:FileTxnSnapLog#301] - Snapshotting: 0x300000000 to /var/lib/zookeeper/data/version-2/snapshot.300000000
2019-07-03 09:44:24,320 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#222] - Accepted socket connection from /150.20.11.157:55744
2019-07-03 09:44:24,324 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#949] - Client attempting to establish new session at /150.20.11.157:55744
2019-07-03 09:44:24,327 [myid:3] - WARN [QuorumPeer[myid=3]/0.0.0.0:2181:Follower#119] - Got zxid 0x300000001 expected 0x1
2019-07-03 09:44:24,327 [myid:3] - INFO [SyncThread:3:FileTxnLog#216] - Creating new log file: log.300000001
2019-07-03 09:44:24,384 [myid:3] - INFO [CommitProcessor:3:ZooKeeperServer#694] - Established session 0x300393be5860000 with negotiated timeout 10000 for client /150.20.11.157:55744
2019-07-03 09:44:24,892 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#222] - Accepted socket connection from /150.20.11.157:55746
2019-07-03 09:44:24,892 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#949] - Client attempting to establish new session at /150.20.11.157:55746
2019-07-03 09:44:24,908 [myid:3] - INFO [CommitProcessor:3:ZooKeeperServer#694] - Established session 0x300393be5860001 with negotiated timeout 10000 for client /150.20.11.157:55746
2019-07-03 09:44:26,410 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#222] - Accepted socket connection from /150.20.11.157:55748
2019-07-03 09:44:26,411 [myid:3] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#903] - Connection request from old client /150.20.11.157:55748; will be dropped if server is in r-o mode
2019-07-03 09:44:26,411 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#949] - Client attempting to establish new session at /150.20.11.157:55748
2019-07-03 09:44:26,422 [myid:3] - INFO [CommitProcessor:3:ZooKeeperServer#694] - Established session 0x300393be5860002 with negotiated timeout 10000 for client /150.20.11.157:55748
2019-07-03 09:45:41,553 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#1056] - Closed socket connection for client /150.20.11.157:55746 which had sessionid 0x300393be5860001
2019-07-03 09:45:41,567 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#1056] - Closed socket connection for client /150.20.11.157:55744 which had sessionid 0x300393be5860000
2019-07-03 09:45:41,597 [myid:3] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#376] - Unable to read additional data from client sessionid 0x300393be5860002, likely client has closed socket
2019-07-03 09:45:41,597 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#1056] - Closed socket connection for client /150.20.11.157:55748 which had sessionid 0x300393be5860002
2019-07-03 09:46:20,896 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#222] - Accepted socket connection from /10.32.0.5:45998
2019-07-03 09:46:20,901 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#949] - Client attempting to establish new session at /10.32.0.5:45998
2019-07-03 09:46:20,916 [myid:3] - INFO [CommitProcessor:3:ZooKeeperServer#694] - Established session 0x300393be5860003 with negotiated timeout 40000 for client /10.32.0.5:45998
2019-07-03 09:46:43,827 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#222] - Accepted socket connection from /150.20.11.157:55864
2019-07-03 09:46:43,830 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#949] - Client attempting to establish new session at /150.20.11.157:55864
2019-07-03 09:46:43,856 [myid:3] - INFO [CommitProcessor:3:ZooKeeperServer#694] - Established session 0x300393be5860004 with negotiated timeout 10000 for client /150.20.11.157:55864
2019-07-03 09:46:44,336 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#222] - Accepted socket connection from /150.20.11.157:55866
2019-07-03 09:46:44,336 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#949] - Client attempting to establish new session at /150.20.11.157:55866
2019-07-03 09:46:44,348 [myid:3] - INFO [CommitProcessor:3:ZooKeeperServer#694] - Established session 0x300393be5860005 with negotiated timeout 10000 for client /150.20.11.157:55866
Would you please guide me on how to use both Mesos slaves to run the Flink platform?
Any help would be really appreciated.

newAPIHadoopRDD reading from HBase consuming too much time (Main cause is Dns.reverseDns)

Recently I was testing my cluster with Spark and HBase, using newAPIHadoopRDD to read records from an HBase table. I found that newAPIHadoopRDD was very slow, and that the time was proportional to the number of region servers.
The Spark debug logs below (debug logging was enabled for the test) show the procedure:
17/03/02 22:00:30 DEBUG AbstractRpcClient: Use SIMPLE authentication for service ClientService, sasl=false
17/03/02 22:00:30 DEBUG AbstractRpcClient: Connecting to slave111/192.168.10.111:16020
17/03/02 22:00:30 DEBUG ClientCnxn: Reading reply sessionid:0x15a8de8a86f0444, packet:: clientPath:null serverPath:null finished:false header:: 5,3 replyHeader:: 5,116079898,0 request:: '/hbase,F response:: s{116070329,116070329,1488462020202,1488462020202,0,16,0,0,0,16,116070652}
17/03/02 22:00:30 DEBUG ClientCnxn: Reading reply sessionid:0x15a8de8a86f0444, packet:: clientPath:null serverPath:null finished:false header:: 6,4 replyHeader:: 6,116079898,0 request:: '/hbase/master,F response:: #ffffffff000146d61737465723a3136303030fffffff4ffffffa23affffffc8ffffffb6ffffffb1ffffffc21a50425546a12a66d617374657210ffffff807d18ffffffcffffffff4fffffffffffffff9ffffffa82b10018ffffff8a7d,s{116070348,116070348,1488462021202,1488462021202,0,0,0,97546372339663909,54,0,116070348}
17/03/02 22:00:30 DEBUG AbstractRpcClient: Use SIMPLE authentication for service MasterService, sasl=false
17/03/02 22:00:30 DEBUG AbstractRpcClient: Connecting to master/192.168.10.100:16000
17/03/02 22:00:30 DEBUG RegionSizeCalculator: Region tt,3,1488442069431.21d34666d310df3f180b2dba093d910d. has size 0
17/03/02 22:00:30 DEBUG RegionSizeCalculator: Region tt,,1488442069431.cb8696957957f824f1a16210768bf197. has size 0
17/03/02 22:00:30 DEBUG RegionSizeCalculator: Region tt,1,1488442069431.274ddaa4abb34f0408cac0f33107529c. has size 0
17/03/02 22:00:30 DEBUG RegionSizeCalculator: Region tt,2,1488442069431.05dd84aacb7f2587e325c8baf4c27613. has size 0
17/03/02 22:00:30 DEBUG RegionSizeCalculator: Region sizes calculated
17/03/02 22:00:38 DEBUG Client: IPC Client (480943798) connection to master/192.168.10.100:9000 from hadoop: closed
17/03/02 22:00:38 DEBUG Client: IPC Client (480943798) connection to master/192.168.10.100:9000 from hadoop: stopped, remaining connections 0
17/03/02 22:00:43 DEBUG ClientCnxn: Got ping response for sessionid: 0x15a8de8a86f0444 after 0ms
17/03/02 22:00:56 DEBUG ClientCnxn: Got ping response for sessionid: 0x15a8de8a86f0444 after 0ms
17/03/02 22:01:00 DEBUG TableInputFormatBase: getSplits: split -> 0 -> HBase table split(table name: tt, scan: , start row: , end row: 1, region location: slave104)
17/03/02 22:01:10 DEBUG ClientCnxn: Got ping response for sessionid: 0x15a8de8a86f0444 after 0ms
17/03/02 22:01:23 DEBUG ClientCnxn: Got ping response for sessionid: 0x15a8de8a86f0444 after 0ms
17/03/02 22:01:30 DEBUG TableInputFormatBase: getSplits: split -> 1 -> HBase table split(table name: tt, scan: , start row: 1, end row: 2, region location: slave102)
17/03/02 22:01:37 DEBUG ClientCnxn: Got ping response for sessionid: 0x15a8de8a86f0444 after 0ms
17/03/02 22:01:50 DEBUG ClientCnxn: Got ping response for sessionid: 0x15a8de8a86f0444 after 0ms
17/03/02 22:02:00 DEBUG TableInputFormatBase: getSplits: split -> 2 -> HBase table split(table name: tt, scan: , start row: 2, end row: 3, region location: slave112)
17/03/02 22:02:03 DEBUG ClientCnxn: Got ping response for sessionid: 0x15a8de8a86f0444 after 0ms
17/03/02 22:02:17 DEBUG ClientCnxn: Got ping response for sessionid: 0x15a8de8a86f0444 after 0ms
17/03/02 22:02:30 DEBUG ClientCnxn: Got ping response for sessionid: 0x15a8de8a86f0444 after 0ms
17/03/02 22:02:30 DEBUG TableInputFormatBase: getSplits: split -> 3 -> HBase table split(table name: tt, scan: , start row: 3, end row: , region location: slave108)
17/03/02 22:02:30 INFO ConnectionManager$HConnectionImplementation: Closing master protocol: MasterService
17/03/02 22:02:30 INFO ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x15a8de8a86f0444
17/03/02 22:02:30 DEBUG ZooKeeper: Closing session: 0x15a8de8a86f0444
17/03/02 22:02:30 DEBUG ClientCnxn: Closing client for session: 0x15a8de8a86f0444
17/03/02 22:02:30 DEBUG ClientCnxn: Reading reply sessionid:0x15a8de8a86f0444, packet:: clientPath:null serverPath:null finished:false header:: 7,-11 replyHeader:: 7,116080795,0 request:: null response:: null
17/03/02 22:02:30 DEBUG ClientCnxn: Disconnecting client for session: 0x15a8de8a86f0444
17/03/02 22:02:30 INFO ZooKeeper: Session: 0x15a8de8a86f0444 closed
17/03/02 22:02:30 INFO ClientCnxn: EventThread shut down
17/03/02 22:02:30 DEBUG AbstractRpcClient: Stopping rpc client
17/03/02 22:02:30 DEBUG ClientCnxn: An exception was thrown while closing send thread for session 0x15a8de8a86f0444 : Unable to read additional data from server sessionid 0x15a8de8a86f0444, likely server has closed socket
17/03/02 22:02:30 DEBUG ClosureCleaner: +++ Cleaning closure <function1> (org.apache.spark.rdd.RDD$$anonfun$count$1) +++
I'm using Spark 2.1.0 and HBase 1.1.2. The getSplits operation takes far too much time. I tested with one to four region servers, and it takes about 30 seconds per region server. The HBase table contains no records (it exists just for the test).
Is this normal? Does anyone else have the same problem?
The test code is shown below:
Configuration hconf = HBaseConfiguration.create();
hconf.set(TableInputFormat.INPUT_TABLE, GLOBAL.TABLE_NAME);
hconf.set("hbase.zookeeper.quorum", "192.168.10.100");
hconf.set("hbase.zookeeper.property.clientPort", "2181");
Scan scan = new Scan();
JavaPairRDD<ImmutableBytesWritable, Result> results = sc.newAPIHadoopRDD(hconf, TableInputFormat.class, ImmutableBytesWritable.class, Result.class);
long cnt = results.count();
System.out.println(cnt);
EDIT
After debugging through the HBase source code, I found the cause of the slowness: the reverse DNS lookup in TableInputFormatBase.java is the culprit.
ipAddressString = DNS.reverseDns(ipAddress, null);
How can I solve this problem? Can I add some DNS/IP pairs to the HBase configuration?
I got the result below when using nslookup to do a reverse lookup of 192.168.10.100:
;; connection timed out; trying next origin
;; connection timed out; no servers could be reached
So I executed the commands below:
sudo iptables -t nat -A POSTROUTING -s 192.168.10.0/24 -o em4 -j MASQUERADE
sudo sysctl -w net.ipv4.ip_forward=1
sudo route add default gw 'mygatway' em4
After that, the problem was gone.
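For anyone hitting the same symptom, here is a small diagnostic sketch that times the same reverse lookup TableInputFormatBase performs; the IP address is the example master address from the question, and org.apache.hadoop.net.DNS comes from the Hadoop common library:

import java.net.InetAddress;
import org.apache.hadoop.net.DNS;

public class ReverseDnsProbe {

    public static void main(String[] args) throws Exception {
        InetAddress regionServer = InetAddress.getByName("192.168.10.100"); // example address
        long start = System.nanoTime();
        // The same lookup TableInputFormatBase runs for every region location.
        String host = DNS.reverseDns(regionServer, null);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(host + " resolved in " + elapsedMs + " ms");
    }
}

If this takes tens of seconds per call, the cluster's reverse DNS (or the route to the DNS server) is the bottleneck rather than Spark or HBase themselves.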

Strange logs when a Vaadin page is opened in OSGi

I get the following logs when opening a Vaadin page.
The strange logs stop when I close the web page.
[qtp948395645-39] DEBUG org.eclipse.jetty.http.HttpParser - filled 447/447
[qtp948395645-39 - /] DEBUG org.eclipse.jetty.server.Server - REQUEST / on AsyncHttpConnection#5444c658,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=-5,l=48,c=0},r=11
[qtp948395645-39 - /] DEBUG org.eclipse.jetty.server.handler.ContextHandler - scope null||/ # o.e.j.s.ServletContextHandler{/,null}
[qtp948395645-39 - /] DEBUG org.eclipse.jetty.server.handler.ContextHandler - context=||/ # o.e.j.s.ServletContextHandler{/,null}
[qtp948395645-39 - /] DEBUG org.eclipse.jetty.server.session - Got Session ID 1mddljaq8cpy11l0btqfs6p34s from cookie
[qtp948395645-39 - /] DEBUG org.eclipse.jetty.server.session - sessionManager=org.eclipse.jetty.server.session.HashSessionManager#19868320
[qtp948395645-39 - /] DEBUG org.eclipse.jetty.server.session - session=org.eclipse.jetty.server.session.HashedSession:1mddljaq8cpy11l0btqfs6p34s#1806836909
[qtp948395645-39 - /] DEBUG org.eclipse.jetty.servlet.ServletHandler - servlet ||/ -> org.apache.felix.http.base.internal.DispatcherServlet-158d255c
[qtp948395645-39 - /] DEBUG org.eclipse.jetty.servlet.ServletHandler - chain=null
[qtp948395645-39 - /] DEBUG org.eclipse.jetty.server.Server - RESPONSE / 200 handled=true
[qtp948395645-39] DEBUG org.eclipse.jetty.server.AsyncHttpConnection - Enabled read interest SCEP#11dfd090{l(/10.221.137.111:56461)<->r(/10.224.129.14:80),s=1,open=true,ishut=false,oshut=false,rb=false,wb=false,w=true,i=1r}-{AsyncHttpConnection#5444c658,g=HttpGenerator{s=4,h=0,b=0,c=-1},p=HttpParser{s=0,l=48,c=0},r=11}
[qtp948395645-39] DEBUG org.eclipse.jetty.http.HttpParser - filled 0/0
[qtp948395645-36] DEBUG org.eclipse.jetty.http.HttpParser - filled 474/474
[qtp948395645-36 - /VAADIN/widgetsets/com.vaadin.DefaultWidgetSet/com.vaadin.DefaultWidgetSet.nocache.js?1443457921853] DEBUG org.eclipse.jetty.server.Server - REQUEST /VAADIN/widgetsets/com.vaadin.DefaultWidgetSet/com.vaadin.DefaultWidgetSet.nocache.js on AsyncHttpConnection#5444c658,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=-5,l=48,c=0},r=12
[qtp948395645-36 - /VAADIN/widgetsets/com.vaadin.DefaultWidgetSet/com.vaadin.DefaultWidgetSet.nocache.js?1443457921853] DEBUG org.eclipse.jetty.server.handler.ContextHandler - scope null||/VAADIN/widgetsets/com.vaadin.DefaultWidgetSet/com.vaadin.DefaultWidgetSet.nocache.js # o.e.j.s.ServletContextHandler{/,null}
[qtp948395645-36 - /VAADIN/widgetsets/com.vaadin.DefaultWidgetSet/com.vaadin.DefaultWidgetSet.nocache.js?1443457921853] DEBUG org.eclipse.jetty.server.handler.ContextHandler - context=||/VAADIN/widgetsets/com.vaadin.DefaultWidgetSet/com.vaadin.DefaultWidgetSet.nocache.js # o.e.j.s.ServletContextHandler{/,null}
[qtp948395645-36 - /VAADIN/widgetsets/com.vaadin.DefaultWidgetSet/com.vaadin.DefaultWidgetSet.nocache.js?1443457921853] DEBUG org.eclipse.jetty.server.session - Got Session ID 1mddljaq8cpy11l0btqfs6p34s from cookie
[qtp948395645-36 - /VAADIN/widgetsets/com.vaadin.DefaultWidgetSet/com.vaadin.DefaultWidgetSet.nocache.js?1443457921853] DEBUG org.eclipse.jetty.server.session - sessionManager=org.eclipse.jetty.server.session.HashSessionManager#19868320
[qtp948395645-36 - /VAADIN/widgetsets/com.vaadin.DefaultWidgetSet/com.vaadin.DefaultWidgetSet.nocache.js?1443457921853] DEBUG org.eclipse.jetty.server.session - session=org.eclipse.jetty.server.session.HashedSession:1mddljaq8cpy11l0btqfs6p34s#1806836909
[qtp948395645-36 - /VAADIN/widgetsets/com.vaadin.DefaultWidgetSet/com.vaadin.DefaultWidgetSet.nocache.js?1443457921853] DEBUG org.eclipse.jetty.servlet.ServletHandler - servlet ||/VAADIN/widgetsets/com.vaadin.DefaultWidgetSet/com.vaadin.DefaultWidgetSet.nocache.js -> org.apache.felix.http.base.internal.DispatcherServlet-158d255c
[qtp948395645-36 - /VAADIN/widgetsets/com.vaadin.DefaultWidgetSet/com.vaadin.DefaultWidgetSet.nocache.js?1443457921853] DEBUG org.eclipse.jetty.servlet.ServletHandler - chain=null
[qtp948395645-36 - /VAADIN/widgetsets/com.vaadin.DefaultWidgetSet/com.vaadin.DefaultWidgetSet.nocache.js?1443457921853] DEBUG org.eclipse.jetty.server.Server - RESPONSE /VAADIN/widgetsets/com.vaadin.DefaultWidgetSet/com.vaadin.DefaultWidgetSet.nocache.js 200 handled=true
[qtp948395645-36] DEBUG org.eclipse.jetty.server.AsyncHttpConnection - Enabled read interest SCEP#11dfd090{l(/10.221.137.111:56461)<->r(/10.224.129.14:80),s=1,open=true,ishut=false,oshut=false,rb=false,wb=false,w=true,i=1r}-{AsyncHttpConnection#5444c658,g=HttpGenerator{s=4,h=0,b=0,c=-1},p=HttpParser{s=0,l=48,c=0},r=12}
[qtp948395645-40] DEBUG org.eclipse.jetty.http.HttpParser - filled 731/731
[qtp948395645-36] DEBUG org.eclipse.jetty.http.HttpParser - filled 0/0
[qtp948395645-40 - /?v-1443457921854] DEBUG org.eclipse.jetty.server.Server - REQUEST / on AsyncHttpConnection#34d39e39,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=2,l=48,c=246},r=3
[qtp948395645-40 - /?v-1443457921854] DEBUG org.eclipse.jetty.server.handler.ContextHandler - scope null||/ # o.e.j.s.ServletContextHandler{/,null}
[qtp948395645-40 - /?v-1443457921854] DEBUG org.eclipse.jetty.server.handler.ContextHandler - context=||/ # o.e.j.s.ServletContextHandler{/,null}
[qtp948395645-40 - /?v-1443457921854] DEBUG org.eclipse.jetty.server.session - Got Session ID 1mddljaq8cpy11l0btqfs6p34s from cookie
[qtp948395645-40 - /?v-1443457921854] DEBUG org.eclipse.jetty.server.session - sessionManager=org.eclipse.jetty.server.session.HashSessionManager#19868320
[qtp948395645-40 - /?v-1443457921854] DEBUG org.eclipse.jetty.server.session - session=org.eclipse.jetty.server.session.HashedSession:1mddljaq8cpy11l0btqfs6p34s#1806836909
[qtp948395645-40 - /?v-1443457921854] DEBUG org.eclipse.jetty.servlet.ServletHandler - servlet ||/ -> org.apache.felix.http.base.internal.DispatcherServlet-158d255c
[qtp948395645-40 - /?v-1443457921854] DEBUG org.eclipse.jetty.servlet.ServletHandler - chain=null
[qtp948395645-40 - /?v-1443457921854] INFO com.bekaert.handling.ui.core - Rebuilding session from cookie for user 'admin'
[qtp948395645-40 - /?v-1443457921854] WARN com.bekaert.handling.ui.core.main.ErrorView - Entered in error view:
[qtp948395645-40 - /?v-1443457921854] DEBUG org.eclipse.jetty.server.Server - RESPONSE / 200 handled=true
[qtp948395645-40] DEBUG org.eclipse.jetty.server.AsyncHttpConnection - Enabled read interest SCEP#61bf045{l(/10.221.137.111:56462)<->r(/10.224.129.14:80),s=1,open=true,ishut=false,oshut=false,rb=false,wb=false,w=true,i=1r}-{AsyncHttpConnection#34d39e39,g=HttpGenerator{s=4,h=0,b=0,c=-1},p=HttpParser{s=0,l=48,c=246},r=3}
[qtp948395645-40] DEBUG org.eclipse.jetty.http.HttpParser - filled 0/0
[qtp948395645-42] DEBUG org.eclipse.jetty.http.HttpParser - filled 695/695
[qtp948395645-42 - /UIDL/?v-wsver=7.5.5&v-uiId=1] DEBUG org.eclipse.jetty.server.Server - REQUEST /UIDL/ on AsyncHttpConnection#34d39e39,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=2,l=48,c=200},r=4
[qtp948395645-42 - /UIDL/?v-wsver=7.5.5&v-uiId=1] DEBUG org.eclipse.jetty.server.handler.ContextHandler - scope null||/UIDL/ # o.e.j.s.ServletContextHandler{/,null}
[qtp948395645-42 - /UIDL/?v-wsver=7.5.5&v-uiId=1] DEBUG org.eclipse.jetty.server.handler.ContextHandler - context=||/UIDL/ # o.e.j.s.ServletContextHandler{/,null}
[qtp948395645-42 - /UIDL/?v-wsver=7.5.5&v-uiId=1] DEBUG org.eclipse.jetty.server.session - Got Session ID 1mddljaq8cpy11l0btqfs6p34s from cookie
[qtp948395645-42 - /UIDL/?v-wsver=7.5.5&v-uiId=1] DEBUG org.eclipse.jetty.server.session - sessionManager=org.eclipse.jetty.server.session.HashSessionManager#19868320
[qtp948395645-42 - /UIDL/?v-wsver=7.5.5&v-uiId=1] DEBUG org.eclipse.jetty.server.session - session=org.eclipse.jetty.server.session.HashedSession:1mddljaq8cpy11l0btqfs6p34s#1806836909
[qtp948395645-42 - /UIDL/?v-wsver=7.5.5&v-uiId=1] DEBUG org.eclipse.jetty.servlet.ServletHandler - servlet ||/UIDL/ -> org.apache.felix.http.base.internal.DispatcherServlet-158d255c
[qtp948395645-42 - /UIDL/?v-wsver=7.5.5&v-uiId=1] DEBUG org.eclipse.jetty.servlet.ServletHandler - chain=null
[qtp948395645-42 - /UIDL/?v-wsver=7.5.5&v-uiId=1] DEBUG org.eclipse.jetty.server.Server - RESPONSE /UIDL/ 200 handled=true
[qtp948395645-42] DEBUG org.eclipse.jetty.server.AsyncHttpConnection - Enabled read interest SCEP#61bf045{l(/10.221.137.111:56462)<->r(/10.224.129.14:80),s=1,open=true,ishut=false,oshut=false,rb=false,wb=false,w=true,i=1r}-{AsyncHttpConnection#34d39e39,g=HttpGenerator{s=4,h=0,b=0,c=-1},p=HttpParser{s=0,l=48,c=200},r=4}
[qtp948395645-42] DEBUG org.eclipse.jetty.http.HttpParser - filled 0/0
[qtp948395645-37] DEBUG org.eclipse.jetty.http.HttpParser - filled 607/607
[qtp948395645-37 - /UIDL/?v-uiId=1] DEBUG org.eclipse.jetty.server.Server - REQUEST /UIDL/ on AsyncHttpConnection#34d39e39,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=2,l=48,c=126},r=5
[qtp948395645-37 - /UIDL/?v-uiId=1] DEBUG org.eclipse.jetty.server.handler.ContextHandler - scope null||/UIDL/ # o.e.j.s.ServletContextHandler{/,null}
[qtp948395645-37 - /UIDL/?v-uiId=1] DEBUG org.eclipse.jetty.server.handler.ContextHandler - context=||/UIDL/ # o.e.j.s.ServletContextHandler{/,null}
[qtp948395645-37 - /UIDL/?v-uiId=1] DEBUG org.eclipse.jetty.server.session - Got Session ID 1mddljaq8cpy11l0btqfs6p34s from cookie
[qtp948395645-37 - /UIDL/?v-uiId=1] DEBUG org.eclipse.jetty.server.session - sessionManager=org.eclipse.jetty.server.session.HashSessionManager#19868320
[qtp948395645-37 - /UIDL/?v-uiId=1] DEBUG org.eclipse.jetty.server.session - session=org.eclipse.jetty.server.session.HashedSession:1mddljaq8cpy11l0btqfs6p34s#1806836909
[qtp948395645-37 - /UIDL/?v-uiId=1] DEBUG org.eclipse.jetty.servlet.ServletHandler - servlet ||/UIDL/ -> org.apache.felix.http.base.internal.DispatcherServlet-158d255c
[qtp948395645-37 - /UIDL/?v-uiId=1] DEBUG org.eclipse.jetty.servlet.ServletHandler - chain=null
[qtp948395645-37 - /UIDL/?v-uiId=1] DEBUG org.eclipse.jetty.server.Server - RESPONSE /UIDL/ 200 handled=true
[qtp948395645-37] DEBUG org.eclipse.jetty.server.AsyncHttpConnection - Enabled read interest SCEP#61bf045{l(/10.221.137.111:56462)<->r(/10.224.129.14:80),s=1,open=true,ishut=false,oshut=false,rb=false,wb=false,w=true,i=1r}-{AsyncHttpConnection#34d39e39,g=HttpGenerator{s=4,h=0,b=0,c=-1},p=HttpParser{s=0,l=48,c=126},r=5}
[qtp948395645-37] DEBUG org.eclipse.jetty.http.HttpParser - filled 0/0
[DefaultQuartzScheduler_Worker-10] DEBUG com.bekaert.handling.order.location.sap.connector.impl - Start search for new SAP orders
[qtp948395645-39] DEBUG org.eclipse.jetty.http.HttpParser - filled 607/607
[qtp948395645-39 - /UIDL/?v-uiId=1] DEBUG org.eclipse.jetty.server.Server - REQUEST /UIDL/ on AsyncHttpConnection#34d39e39,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=2,l=48,c=126},r=6
[qtp948395645-39 - /UIDL/?v-uiId=1] DEBUG org.eclipse.jetty.server.handler.ContextHandler - scope null||/UIDL/ # o.e.j.s.ServletContextHandler{/,null}
[qtp948395645-39 - /UIDL/?v-uiId=1] DEBUG org.eclipse.jetty.server.handler.ContextHandler - context=||/UIDL/ # o.e.j.s.ServletContextHandler{/,null}
[qtp948395645-39 - /UIDL/?v-uiId=1] DEBUG org.eclipse.jetty.server.session - Got Session ID 1mddljaq8cpy11l0btqfs6p34s from cookie
[qtp948395645-39 - /UIDL/?v-uiId=1] DEBUG org.eclipse.jetty.server.session - sessionManager=org.eclipse.jetty.server.session.HashSessionManager#19868320
[qtp948395645-39 - /UIDL/?v-uiId=1] DEBUG org.eclipse.jetty.server.session - session=org.eclipse.jetty.server.session.HashedSession:1mddljaq8cpy11l0btqfs6p34s#1806836909
[qtp948395645-39 - /UIDL/?v-uiId=1] DEBUG org.eclipse.jetty.servlet.ServletHandler - servlet ||/UIDL/ -> org.apache.felix.http.base.internal.DispatcherServlet-158d255c
[qtp948395645-39 - /UIDL/?v-uiId=1] DEBUG org.eclipse.jetty.servlet.ServletHandler - chain=null
[qtp948395645-39 - /UIDL/?v-uiId=1] DEBUG org.eclipse.jetty.server.Server - RESPONSE /UIDL/ 200 handled=true
[qtp948395645-39] DEBUG org.eclipse.jetty.server.AsyncHttpConnection - Enabled read interest SCEP#61bf045{l(/10.221.137.111:56462)<->r(/10.224.129.14:80),s=1,open=true,ishut=false,oshut=false,rb=false,wb=false,w=true,i=1r}-{AsyncHttpConnection#34d39e39,g=HttpGenerator{s=4,h=0,b=0,c=-1},p=HttpParser{s=0,l=48,c=126},r=6}
[qtp948395645-39] DEBUG org.eclipse.jetty.http.HttpParser - filled 0/0
[qtp948395645-36] DEBUG org.eclipse.jetty.io.nio.ChannelEndPoint - ishut SCEP#61bf045{l(/10.221.137.111:56462)<->r(/10.224.129.14:80),s=1,open=true,ishut=false,oshut=false,rb=false,wb=false,w=true,i=1r}-{AsyncHttpConnection#34d39e39,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=-14,l=0,c=-3},r=6}
[qtp948395645-36] DEBUG org.eclipse.jetty.http.HttpParser - filled -1/0
[qtp948395645-36] DEBUG org.eclipse.jetty.server.AsyncHttpConnection - Disabled read interest while writing response SCEP#61bf045{l(/10.221.137.111:56462)<->r(/10.224.129.14:80),s=1,open=true,ishut=true,oshut=false,rb=false,wb=false,w=true,i=1r}-{AsyncHttpConnection#34d39e39,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=0,l=0,c=-3},r=6}
[qtp948395645-36] DEBUG org.eclipse.jetty.io.nio.ChannelEndPoint - close SCEP#61bf045{l(/10.221.137.111:56462)<->r(/10.224.129.14:80),s=1,open=true,ishut=true,oshut=false,rb=false,wb=false,w=true,i=1!}-{AsyncHttpConnection#34d39e39,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=0,l=0,c=-3},r=6}
[qtp948395645-35 Selector0] DEBUG org.eclipse.jetty.io.nio - destroyEndPoint SCEP#61bf045{l(null)<->r(0.0.0.0/0.0.0.0:80),s=0,open=false,ishut=true,oshut=true,rb=false,wb=false,w=true,i=1!}-{AsyncHttpConnection#34d39e39,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=0,l=0,c=-3},r=6}
[qtp948395645-35 Selector0] DEBUG org.eclipse.jetty.server.AbstractHttpConnection - closed AsyncHttpConnection#34d39e39,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=0,l=0,c=-3},r=6
[qtp948395645-37] DEBUG org.eclipse.jetty.io.nio.ChannelEndPoint - ishut SCEP#5534412c{l(/10.221.137.111:56463)<->r(/10.224.129.14:80),s=1,open=true,ishut=false,oshut=false,rb=false,wb=false,w=true,i=1r}-{AsyncHttpConnection#cf40a17,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=-14,l=0,c=0},r=0}
[qtp948395645-37] DEBUG org.eclipse.jetty.http.HttpParser - filled -1/0
[qtp948395645-37] DEBUG org.eclipse.jetty.server.AsyncHttpConnection - Disabled read interest while writing response SCEP#5534412c{l(/10.221.137.111:56463)<->r(/10.224.129.14:80),s=1,open=true,ishut=true,oshut=false,rb=false,wb=false,w=true,i=1r}-{AsyncHttpConnection#cf40a17,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=0,l=0,c=0},r=0}
[qtp948395645-37] DEBUG org.eclipse.jetty.io.nio.ChannelEndPoint - close SCEP#5534412c{l(/10.221.137.111:56463)<->r(/10.224.129.14:80),s=1,open=true,ishut=true,oshut=false,rb=false,wb=false,w=true,i=1!}-{AsyncHttpConnection#cf40a17,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=0,l=0,c=0},r=0}
[qtp948395645-42] DEBUG org.eclipse.jetty.io.nio.ChannelEndPoint - ishut SCEP#1dcb12a9{l(/10.221.137.111:56464)<->r(/10.224.129.14:80),s=1,open=true,ishut=false,oshut=false,rb=false,wb=false,w=true,i=1r}-{AsyncHttpConnection#6a011d1d,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=-14,l=0,c=0},r=0}
[qtp948395645-42] DEBUG org.eclipse.jetty.http.HttpParser - filled -1/0
[qtp948395645-35 Selector0] DEBUG org.eclipse.jetty.io.nio - destroyEndPoint SCEP#5534412c{l(null)<->r(0.0.0.0/0.0.0.0:80),s=0,open=false,ishut=true,oshut=true,rb=false,wb=false,w=true,i=1!}-{AsyncHttpConnection#cf40a17,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=0,l=0,c=0},r=0}
[qtp948395645-42] DEBUG org.eclipse.jetty.server.AsyncHttpConnection - Disabled read interest while writing response SCEP#1dcb12a9{l(/10.221.137.111:56464)<->r(/10.224.129.14:80),s=1,open=true,ishut=true,oshut=false,rb=false,wb=false,w=true,i=1r}-{AsyncHttpConnection#6a011d1d,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=0,l=0,c=0},r=0}
[qtp948395645-35 Selector0] DEBUG org.eclipse.jetty.server.AbstractHttpConnection - closed AsyncHttpConnection#cf40a17,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=0,l=0,c=0},r=0
[qtp948395645-42] DEBUG org.eclipse.jetty.io.nio.ChannelEndPoint - close SCEP#1dcb12a9{l(/10.221.137.111:56464)<->r(/10.224.129.14:80),s=1,open=true,ishut=true,oshut=false,rb=false,wb=false,w=true,i=1!}-{AsyncHttpConnection#6a011d1d,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=0,l=0,c=0},r=0}
[qtp948395645-42] DEBUG org.eclipse.jetty.io.nio.ChannelEndPoint - ishut SCEP#6d491226{l(/10.221.137.111:56466)<->r(/10.224.129.14:80),s=1,open=true,ishut=false,oshut=false,rb=false,wb=false,w=true,i=1r}-{AsyncHttpConnection#2c01d086,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=-14,l=0,c=0},r=0}
[qtp948395645-42] DEBUG org.eclipse.jetty.http.HttpParser - filled -1/0
[qtp948395645-42] DEBUG org.eclipse.jetty.server.AsyncHttpConnection - Disabled read interest while writing response SCEP#6d491226{l(/10.221.137.111:56466)<->r(/10.224.129.14:80),s=1,open=true,ishut=true,oshut=false,rb=false,wb=false,w=true,i=1r}-{AsyncHttpConnection#2c01d086,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=0,l=0,c=0},r=0}
[qtp948395645-42] DEBUG org.eclipse.jetty.io.nio.ChannelEndPoint - close SCEP#6d491226{l(/10.221.137.111:56466)<->r(/10.224.129.14:80),s=1,open=true,ishut=true,oshut=false,rb=false,wb=false,w=true,i=1!}-{AsyncHttpConnection#2c01d086,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=0,l=0,c=0},r=0}
[qtp948395645-35 Selector0] DEBUG org.eclipse.jetty.io.nio - destroyEndPoint SCEP#1dcb12a9{l(null)<->r(0.0.0.0/0.0.0.0:80),s=0,open=false,ishut=true,oshut=true,rb=false,wb=false,w=true,i=1!}-{AsyncHttpConnection#6a011d1d,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=0,l=0,c=0},r=0}
[qtp948395645-35 Selector0] DEBUG org.eclipse.jetty.server.AbstractHttpConnection - closed AsyncHttpConnection#6a011d1d,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=0,l=0,c=0},r=0
[qtp948395645-35 Selector0] DEBUG org.eclipse.jetty.io.nio - destroyEndPoint SCEP#6d491226{l(null)<->r(0.0.0.0/0.0.0.0:80),s=0,open=false,ishut=true,oshut=true,rb=false,wb=false,w=true,i=1!}-{AsyncHttpConnection#2c01d086,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=0,l=0,c=0},r=0}
[qtp948395645-35 Selector0] DEBUG org.eclipse.jetty.server.AbstractHttpConnection - closed AsyncHttpConnection#2c01d086,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=0,l=0,c=0},r=0
[qtp948395645-37] DEBUG org.eclipse.jetty.io.nio.ChannelEndPoint - ishut SCEP#387f2edf{l(/10.221.137.111:56465)<->r(/10.224.129.14:80),s=1,open=true,ishut=false,oshut=false,rb=false,wb=false,w=true,i=1r}-{AsyncHttpConnection#574b3210,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=-14,l=0,c=0},r=0}
[qtp948395645-37] DEBUG org.eclipse.jetty.http.HttpParser - filled -1/0
[qtp948395645-37] DEBUG org.eclipse.jetty.server.AsyncHttpConnection - Disabled read interest while writing response SCEP#387f2edf{l(/10.221.137.111:56465)<->r(/10.224.129.14:80),s=1,open=true,ishut=true,oshut=false,rb=false,wb=false,w=true,i=1r}-{AsyncHttpConnection#574b3210,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=0,l=0,c=0},r=0}
[qtp948395645-37] DEBUG org.eclipse.jetty.io.nio.ChannelEndPoint - close SCEP#387f2edf{l(/10.221.137.111:56465)<->r(/10.224.129.14:80),s=1,open=true,ishut=true,oshut=false,rb=false,wb=false,w=true,i=1!}-{AsyncHttpConnection#574b3210,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=0,l=0,c=0},r=0}
[qtp948395645-35 Selector0] DEBUG org.eclipse.jetty.io.nio - destroyEndPoint SCEP#387f2edf{l(null)<->r(0.0.0.0/0.0.0.0:80),s=0,open=false,ishut=true,oshut=true,rb=false,wb=false,w=true,i=1!}-{AsyncHttpConnection#574b3210,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=0,l=0,c=0},r=0}
[qtp948395645-35 Selector0] DEBUG org.eclipse.jetty.server.AbstractHttpConnection - closed AsyncHttpConnection#574b3210,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=0,l=0,c=0},r=0
[Timer-2] INFO org.jinterop.dcom.core.JIComOxidRuntime - Running ClientPingTimerTask !
[Timer-2] INFO org.jinterop.dcom.core.JIComOxidRuntime - Within ClientPingTimerTask: holder.currentSetOIDs, current size of which is 2
[Timer-2] INFO org.jinterop.dcom.core.PingObject - Simple Ping going for setId: 00000: 00 00 00 05 65 74 29 12 |....et). |
[Timer-2] INFO org.jinterop -
Sending REQUEST
[Timer-2] INFO org.jinterop -
Recieved RESPONSE
[Timer-2] INFO org.jinterop.dcom.core.PingObject - Simple Ping Succeeded
[Timer-2] INFO org.jinterop.dcom.core.JIComOxidRuntime - Within ClientPingTimerTask: holder.seqNum 1
I don't know what all of this means.
Also, it doesn't always happen: if I restart my OSGi program, there is roughly a 50% chance that I run into this problem.
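
For reference, a minimal sketch of narrowing the DEBUG output to just the Jetty connection classes seen in the dump above, assuming the application logs through SLF4J with Logback on the classpath (the exact logging backend is not confirmed by the logs, and the class name here is purely illustrative):

import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import org.slf4j.LoggerFactory;

public class JettyConnectionDebug {
    public static void main(String[] args) {
        // Keep the rest of the application at INFO so only the connection
        // lifecycle events stand out in the output.
        Logger root = (Logger) LoggerFactory.getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME);
        root.setLevel(Level.INFO);

        // Logger names copied from the dump above: these are the classes that
        // report the ishut / close / destroyEndPoint events on the server side.
        ((Logger) LoggerFactory.getLogger("org.eclipse.jetty.io.nio")).setLevel(Level.DEBUG);
        ((Logger) LoggerFactory.getLogger("org.eclipse.jetty.server.AsyncHttpConnection")).setLevel(Level.DEBUG);
    }
}

Applying this (or the equivalent logback.xml entries) before a restart that triggers the problem would keep the dump limited to the connection open/close lines, which should make it easier to see which side initiates the close.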
