Recently we have been seeing very strange behavior on our Elasticsearch cluster: the cluster status changed from green to red. On investigation we found that some shards are unassigned, and the data node logs contain the following exception, even though the server has plenty of free memory.
Exception:
java.io.IOException: Cannot allocate memory
at sun.nio.ch.FileDispatcherImpl.force0(Native Method) ~[?:?]
at sun.nio.ch.FileDispatcherImpl.force(FileDispatcherImpl.java:82) ~[?:?]
at sun.nio.ch.FileChannelImpl.force(FileChannelImpl.java:465) ~[?:?]
at org.elasticsearch.index.translog.TranslogWriter.syncUpTo(TranslogWriter.java:376) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.index.translog.Translog.ensureSynced(Translog.java:784) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.index.translog.Translog.ensureSynced(Translog.java:805) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.index.engine.InternalEngine.ensureTranslogSynced(InternalEngine.java:535) ~[elasticsearch-7.9.3.jar:7.9.3]
at org.elasticsearch.index.shard.IndexShard$5.write(IndexShard.java:3090) ~[elasticsearch-7.9.3.jar:7.9.3]
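For anyone debugging the same symptom, here is a minimal diagnostic sketch, assuming shell access to the affected data node and that Elasticsearch listens on localhost:9200 (adjust the host to your setup). It only inspects OS-level memory settings and retries allocation of the failed shards; it is not a definitive fix for the underlying allocation failure.
# OS-level memory view; "Cannot allocate memory" during a translog fsync usually
# points at native memory or OS limits rather than the JVM heap.
free -m
cat /proc/sys/vm/overcommit_memory
cat /proc/sys/vm/max_map_count
# Ask Elasticsearch why a shard is unassigned (it explains the first unassigned shard it finds).
curl -s 'localhost:9200/_cluster/allocation/explain?pretty'
# Once memory pressure is resolved, retry shards whose allocation failed repeatedly.
curl -s -X POST 'localhost:9200/_cluster/reroute?retry_failed=true&pretty'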
Related
I'm using Dataflow with these steps:
- Read from BigQuery
- Convert the table row to a JSON string
- Insert into Elasticsearch (7.5.2)
It seems to work fine with ~100k records, but with the real data (8M records, ~65 GB) Dataflow throws an exception after about 300k insertions.
Error message from worker: java.lang.RuntimeException: unexpected
    org.apache.beam.runners.dataflow.worker.util.common.worker.CachingShuffleBatchReader.read(CachingShuffleBatchReader.java:104)
    org.apache.beam.runners.dataflow.worker.util.common.worker.BatchingShuffleEntryReader$ShuffleReadIterator.fillEntries(BatchingShuffleEntryReader.java:125)
    org.apache.beam.runners.dataflow.worker.util.common.worker.BatchingShuffleEntryReader$ShuffleReadIterator.fillEntriesIfNeeded(BatchingShuffleEntryReader.java:119)
    org.apache.beam.runners.dataflow.worker.util.common.worker.BatchingShuffleEntryReader$ShuffleReadIterator.hasNext(BatchingShuffleEntryReader.java:84)
    org.apache.beam.runners.dataflow.worker.util.common.ForwardingReiterator.hasNext(ForwardingReiterator.java:63)
    org.apache.beam.runners.dataflow.worker.util.common.worker.GroupingShuffleEntryIterator.advance(GroupingShuffleEntryIterator.java:109)
    org.apache.beam.runners.dataflow.worker.GroupingShuffleReader$GroupingShuffleReaderIterator.advance(GroupingShuffleReader.java:272)
    org.apache.beam.runners.dataflow.worker.GroupingShuffleReader$GroupingShuffleReaderIterator.start(GroupingShuffleReader.java:266)
    org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.start(ReadOperation.java:361)
    org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:194)
    org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:159)
    org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:77)
    org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:411)
    org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:380)
    org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:305)
    org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.util.concurrent.ExecutionException: java.io.IOException: DEADLINE_EXCEEDED: (g)RPC timed out when customerdataworker-khanhn-02170116-o318-harness-8pqr talking to customerdataworker-khanhn-02170116-o318-harness-f8zt:12346. Server unresponsive (ping error: Deadline Exceeded, {"created":"#1581934551.886578453","description":"Deadline Exceeded","file":"third_party/grpc/src/core/ext/filters/deadline/deadline_filter.cc","file_line":69,"grpc_status":4}). Typically one can self manage this issue, please read: https://cloud.google.com/dataflow/docs/guides/common-errors#tsg-rpc-timeout
    org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:531)
    org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:492)
    org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.AbstractFuture$TrustedFuture.get(AbstractFuture.java:83)
    org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:196)
    org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.getAndRecordStats(LocalCache.java:2312)
    org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2278)
    org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2154)
    org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2044)
    org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.get(LocalCache.java:3952)
    org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3974)
    org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4958)
    org.apache.beam.runners.dataflow.worker.util.common.worker.CachingShuffleBatchReader.read(CachingShuffleBatchReader.java:101)
    ... 21 more
Caused by: java.io.IOException: DEADLINE_EXCEEDED: (g)RPC timed out when customerdataworker-khanhn-02170116-o318-harness-8pqr talking to customerdataworker-khanhn-02170116-o318-harness-f8zt:12346. Server unresponsive (ping error: Deadline Exceeded, {"created":"#1581934551.886578453","description":"Deadline Exceeded","file":"third_party/grpc/src/core/ext/filters/deadline/deadline_filter.cc","file_line":69,"grpc_status":4}). Typically one can self manage this issue, please read: https://cloud.google.com/dataflow/docs/guides/common-errors#tsg-rpc-timeout
    org.apache.beam.runners.dataflow.worker.ApplianceShuffleReader.readIncludingPosition(Native Method)
    org.apache.beam.runners.dataflow.worker.ChunkingShuffleBatchReader.read(ChunkingShuffleBatchReader.java:58)
    org.apache.beam.runners.dataflow.worker.util.common.worker.CachingShuffleBatchReader$1.load(CachingShuffleBatchReader.java:70)
    org.apache.beam.runners.dataflow.worker.util.common.worker.CachingShuffleBatchReader$1.load(CachingShuffleBatchReader.java:66)
    org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3528)
    org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2277)
    ... 27 more
I have also applied the configuration recommended at https://cloud.google.com/dataflow/docs/guides/common-errors#tsg-rpc-timeout (increased disk size, number of workers), but it still does not work.
My current configuration:
--runner=DataflowRunner \
--numWorkers=10 \
--maxNumWorkers=20 \
--diskSizeGb=150 \
--workerMachineType=n1-standard-1 \
--region=asia-east1
Update 1:
Log in Stackdriver (screenshot not included here).
Thanks for the replies, I have solved the problem. The root cause was the firewall rules applied to the VM instances created by Dataflow; we need to allow connections between those workers.
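For reference, a firewall rule along the lines of what the Dataflow troubleshooting page suggests is sketched below; it assumes the workers run on the default network and carry the standard dataflow network tag, and the rule name is made up, so adapt it to your project:
# Allow Dataflow worker VMs to reach each other on the shuffle/streaming ports.
gcloud compute firewall-rules create allow-dataflow-internal \
  --network=default \
  --direction=INGRESS \
  --allow=tcp:12345-12346 \
  --source-tags=dataflow \
  --target-tags=dataflow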
I have an AWS Elasticsearch t2.medium instance with 2 nodes running, and hardly any load on it. Still, it crashes all the time.
For the JVMMemoryPressure metric I see the following graph (screenshot not included here).
When I go to Kibana, I see the following error message (screenshot not included here).
Questions:
Do I interpret correctly that the machines have only 64 MB of memory available, instead of the 4 GB that should come with this instance type? Is there another place to verify the absolute amount of heap memory, rather than seeing it in Kibana only when things go wrong?
If so, how can I change this behavior?
If this is normal, where can I look for possible causes of Elasticsearch crashing whenever the memory footprint reaches 100%? The load on the instance is only very small.
In the instance logs I see a lot of warnings, e.g. the ones below, but they don't provide any clue about where to start debugging the issue.
[2018-08-15T07:36:37,021][WARN ][r.suppressed ] path: __PATH__ params:
{}
org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [__PATH__ master];
at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:165) ~[elasticsearch-6.0.1.jar:6.0.1]
at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation.handleBlockExceptions(TransportBulkAction.java:387) [elasticsearch-6.0.1.jar:6.0.1]
at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation.doRun(TransportBulkAction.java:273) [elasticsearch-6.0.1.jar:6.0.1]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.0.1.jar:6.0.1]
at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$2.onTimeout(TransportBulkAction.java:421) [elasticsearch-6.0.1.jar:6.0.1]
at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:317) [elasticsearch-6.0.1.jar:6.0.1]
at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:244) [elasticsearch-6.0.1.jar:6.0.1]
at org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:578) [elasticsearch-6.0.1.jar:6.0.1]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-6.0.1.jar:6.0.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_172]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_172]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_172]
or
[2018-08-15T07:36:37,691][WARN ][o.e.d.z.ZenDiscovery ] [U1DMgyE] not enough master nodes discovered during pinging (found [[Candidate{node={U1DMgyE}{U1DMgyE1Rn2gId2aRgRDtw}{F-tqTFGDRZaovQF8ILC44w}{__IP__}{__IP__}{__AMAZON_INTERNAL__, __AMAZON_INTERNAL__}, clusterStateVersion=207939}]], but needed [2]), pinging again
or
[2018-08-15T07:36:42,303][WARN ][o.e.t.n.Netty4Transport ] [U1DMgyE] write and flush on the network layer failed (channel: [id: 0x385d3b63, __PATH__ ! __PATH__])
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.writev0(Native Method) ~[?:1.8.0_172]
at sun.nio.ch.SocketDispatcher.writev(SocketDispatcher.java:51) ~[?:1.8.0_172]
at sun.nio.ch.IOUtil.write(IOUtil.java:148) ~[?:1.8.0_172]
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:504) ~[?:1.8.0_172]
at io.netty.channel.socket.nio.NioSocketChannel.doWrite(NioSocketChannel.java:432) ~[netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:856) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.forceFlush(AbstractNioChannel.java:368) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:638) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.13.Final.jar:4.1.13.Final]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_172]
I have since learned that that number is incorrect; I don't know where it comes from. To get the correct memory usage, run the following query:
GET "<es_url>:9200/_nodes/stats"
If you're looking for memory usage only, use GET "<es_url>:9200/_cat/nodes?h=heap*" - it gives a more readable response, like the one below.
{
"payload": [
{
"heap.current": "4.1gb",
"heap.max": "15.9gb",
"heap.percent": "25"
},
{
"heap.current": "3.9gb",
"heap.max": "15.9gb",
"heap.percent": "24"
},
...
  ]
}
_nodes/stats is more elaborate and includes all the other details as well, though.
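If you want the heap numbers in JSON rather than the _cat table, you can also trim _nodes/stats with filter_path; a quick sketch (substitute your own endpoint for <es_url>):
# Human-readable heap columns per node.
curl -s "<es_url>:9200/_cat/nodes?v&h=name,heap.current,heap.max,heap.percent"
# The same information from _nodes/stats, filtered down to the JVM heap fields.
curl -s "<es_url>:9200/_nodes/stats/jvm?filter_path=nodes.*.name,nodes.*.jvm.mem.heap_*&pretty"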
I'm running a 5-node Elasticsearch cluster (2 data nodes, 2 master nodes, 1 Kibana node).
I'm getting the following error when I use the command
curl -X GET "192.168.107.75:9200/_cat/master?v"
{"error":{"root_cause":[{"type":"master_not_discovered_exception","reason":null}
],"type":"master_not_discovered_exception","reason":null},"status":503}
I'm using the following command to run Elasticsearch:
sudo systemctl start elasticsearch.service
This is the message I see in the logs:
[2018-05-28T21:02:22,074][WARN ][o.e.d.z.ZenDiscovery ] [node-master-1] not enough master nodes discovered during pinging (found [[Candidate{node={node-master-1}{kJKYkpdbTKmdIeq-RVnCAQ}{JGbXMxOXR0SyjCu746Zlwg}{192.168.107.75}{192.168.107.75:9300}, clusterStateVersion=-1}]], but needed [2]), pinging again
[2018-05-28T21:02:25,076][WARN ][o.e.d.z.ZenDiscovery ] [node-master-1] not enough master nodes discovered during pinging (found [[Candidate{node={node-master-1}{kJKYkpdbTKmdIeq-RVnCAQ}{JGbXMxOXR0SyjCu746Zlwg}{192.168.107.75}{192.168.107.75:9300}, clusterStateVersion=-1}]], but needed [2]), pinging again
[2018-05-28T21:02:28,077][WARN ][o.e.d.z.ZenDiscovery ] [node-master-1] not enough master nodes discovered during pinging (found [[Candidate{node={node-master-1}{kJKYkpdbTKmdIeq-RVnCAQ}{JGbXMxOXR0SyjCu746Zlwg}{192.168.107.75}{192.168.107.75:9300}, clusterStateVersion=-1}]], but needed [2]), pinging again
[2018-05-28T21:02:31,079][WARN ][o.e.d.z.ZenDiscovery ] [node-master-1] not enough master nodes discovered during pinging (found [[Candidate{node={node-master-1}{kJKYkpdbTKmdIeq-RVnCAQ}{JGbXMxOXR0SyjCu746Zlwg}{192.168.107.75}{192.168.107.75:9300}, clusterStateVersion=-1}]], but needed [2]), pinging again
[2018-05-28T21:02:34,081][WARN ][o.e.d.z.ZenDiscovery ] [node-master-1] not enough master nodes discovered during pinging (found [[Candidate{node={node-master-1}{kJKYkpdbTKmdIeq-RVnCAQ}{JGbXMxOXR0SyjCu746Zlwg}{192.168.107.75}{192.168.107.75:9300}, clusterStateVersion=-1}]], but needed [2]), pinging again
[2018-05-28T21:02:37,084][WARN ][o.e.d.z.ZenDiscovery ] [node-master-1] not enough master nodes discovered during pinging (found [[Candidate{node={node-master-1}{kJKYkpdbTKmdIeq-RVnCAQ}{JGbXMxOXR0SyjCu746Zlwg}{192.168.107.75}{192.168.107.75:9300}, clusterStateVersion=-1}]], but needed [2]), pinging again
[2018-05-28T21:02:40,090][WARN ][o.e.d.z.ZenDiscovery ] [node-master-1] failed to connect to master [{node-master-2}{_M4BTrFbQguT3PbY5d2_JA}{1rzJcDPSQ5OH2OZ_CnhR-g}{192.168.107.76}{192.168.107.76:9300}], retrying...
org.elasticsearch.transport.ConnectTransportException: [node-master-2][192.168.107.76:9300] connect_exception
at org.elasticsearch.transport.TcpChannel.awaitConnected(TcpChannel.java:165) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.transport.TcpTransport.openConnection(TcpTransport.java:616) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.transport.TcpTransport.connectToNode(TcpTransport.java:513) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:331) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:318) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.discovery.zen.ZenDiscovery.joinElectedMaster(ZenDiscovery.java:515) [elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:483) [elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.discovery.zen.ZenDiscovery.access$2500(ZenDiscovery.java:90) [elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.discovery.zen.ZenDiscovery$JoinThreadControl$1.run(ZenDiscovery.java:1253) [elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:573) [elasticsearch-6.2.4.jar:6.2.4]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_172]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_172]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_172]
Caused by: io.netty.channel.AbstractChannel$AnnotatedNoRouteToHostException: No route to host: 192.168.107.76/192.168.107.76:9300
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[?:?]
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[?:?]
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:323) ~[?:?]
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:340) ~[?:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:633) ~[?:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:545) ~[?:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:499) ~[?:?]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) ~[?:?]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) ~[?:?]
... 1 more
Caused by: java.net.NoRouteToHostException: No route to host
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[?:?]
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[?:?]
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:323) ~[?:?]
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:340) ~[?:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:633) ~[?:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:545) ~[?:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:499) ~[?:?]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) ~[?:?]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) ~[?:?]
... 1 more
In the elasticsearch.yml file, apart from the settings that assign the different roles to the nodes, I'm using the following configuration:
cluster.name: test_cluster
network.host: 192.168.107.71
discovery.zen.ping.unicast.hosts: ["192.168.107.73", "192.168.107.74", "192.168.107.75", "192.168.107.76"]
#the above two configuration IPs change as per the node
discovery.zen.minimum_master_nodes: 2
The hosts are pingable and have access to each other.
Any help would be much appreciated.
I think the problem is quite clear: [node-master-2][192.168.107.76] is either not accessible from this host, or the Elasticsearch process on [node-master-2] is down.
You can check whether curl -XGET "192.168.107.76:9200" from this host returns a valid answer.
Also, the Elasticsearch documentation explicitly says:
It is recommended to avoid having only two master eligible nodes,
since a quorum of two is two. Therefore, a loss of either master
eligible node will result in an inoperable cluster.
This Elasticsearch install guide provides guidance on how to fix master_not_discovered_exception errors. Basically, you can get this error for several reasons:
- A firewall rule is blocking communication
- Master / data host names cannot be resolved (not your case, as you are using IP addresses)
- Incorrect elasticsearch.yml configuration (e.g. the master node is not configured as a master node, or is running on a different port / IP address)
The first and second items can easily be checked with telnet (from the master, telnet to the data node, and the other way around); see the sketch below.
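A minimal connectivity check, assuming the IP addresses from the question and that telnet (or nc) and curl are available on the nodes:
# From node-master-1 (192.168.107.75): can we reach node-master-2's transport port?
telnet 192.168.107.76 9300      # or: nc -zv 192.168.107.76 9300
# Is the Elasticsearch process on node-master-2 up and answering HTTP?
curl -XGET "192.168.107.76:9200"
# Repeat in the opposite direction from node-master-2 back to this node.
telnet 192.168.107.75 9300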
I have an Elastic MapReduce job which uses elasticsearch-hadoop via scalding-taps to transfer data from Amazon S3 to Amazon Elasticsearch Service. For a long time this job ran successfully. However, it has recently started failing with the following stack trace:
2016-03-02 07:28:34,003 FATAL [IPC Server handler 0 on 41019] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1456902751849_0012_m_000000_0 - exited : cascading.tuple.TupleException: unable to sink into output identifier: myindex/mytable
at cascading.tuple.TupleEntrySchemeCollector.collect(TupleEntrySchemeCollector.java:160)
at cascading.tuple.TupleEntryCollector.safeCollect(TupleEntryCollector.java:145)
at cascading.tuple.TupleEntryCollector.add(TupleEntryCollector.java:95)
at cascading.tuple.TupleEntrySchemeCollector.add(TupleEntrySchemeCollector.java:134)
at cascading.flow.stream.SinkStage.receive(SinkStage.java:90)
at cascading.flow.stream.SinkStage.receive(SinkStage.java:37)
at cascading.flow.stream.FunctionEachStage$1.collect(FunctionEachStage.java:80)
at cascading.tuple.TupleEntryCollector.safeCollect(TupleEntryCollector.java:145)
at cascading.tuple.TupleEntryCollector.add(TupleEntryCollector.java:133)
at com.twitter.scalding.MapFunction.operate(Operations.scala:59)
at cascading.flow.stream.FunctionEachStage.receive(FunctionEachStage.java:99)
at cascading.flow.stream.FunctionEachStage.receive(FunctionEachStage.java:39)
at cascading.flow.stream.SourceStage.map(SourceStage.java:102)
at cascading.flow.stream.SourceStage.run(SourceStage.java:58)
at cascading.flow.hadoop.FlowMapper.run(FlowMapper.java:130)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:432)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:175)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:170)
Caused by: org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest: null
We have enabled the "es.nodes.wan.only" setting.
What could be causing this failure?
I am running into this error while running the Hadoop Balancer via the Namenode. Any tips on cracking this? The process is also blocking the current user and gives an out-of-memory error when any other command is issued.
14/05/09 11:30:05 WARN hdfs.LeaseRenewer: Failed to renew lease for [DFSClient_NONMAPREDUCE_-77290934_1] for 936 seconds. Will retry shortly ...
java.io.IOException: Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "hadoop01.xx.xx.xx.xx.com/30.0.1.176"; destination host is: "hadoop01.xx.xx.xx.xx.com":8022;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:763)
at org.apache.hadoop.ipc.Client.call(Client.java:1242)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
at $Proxy10.renewLease(Unknown Source)
at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
at $Proxy10.renewLease(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.renewLease(ClientNamenodeProtocolTranslatorPB.java:458)
at org.apache.hadoop.hdfs.DFSClient.renewLease(DFSClient.java:649)
at org.apache.hadoop.hdfs.LeaseRenewer.renew(LeaseRenewer.java:417)
at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:442)
at org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:298)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: Couldn't set up IO streams
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:671)
at org.apache.hadoop.ipc.Client$Connection.access$2100(Client.java:252)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1291)
at org.apache.hadoop.ipc.Client.call(Client.java:1209)
... 15 more
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:640)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:664)
... 18 more
Once the number of threads created by Hadoop RPC reaches the node's per-user process limit (ulimit -u), Java reports it as an out-of-memory error ("unable to create new native thread").
Try increasing the maximum number of processes allowed, i.e. your ulimit -u value.
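A sketch of how to check and raise that limit, assuming the balancer runs as the hdfs user (the user name and the 65536 value are placeholders; pick what fits your environment):
# Current per-user process/thread limit for the user running the balancer.
sudo -u hdfs bash -c 'ulimit -u'
# Rough count of threads that user currently owns.
ps -eLf | awk '$1 == "hdfs"' | wc -l
# Raise the limit persistently, e.g. via /etc/security/limits.conf, then log in again:
#   hdfs  soft  nproc  65536
#   hdfs  hard  nproc  65536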