Shard allocation failing after power outage (elasticsearch)

[2016-08-25 11:23:12,410][WARN ][indices.cluster ] [Cobalt Man] [[fct_providers][4]] marking and sending shard failed due to [failed recovery]
[fct_providers][[fct_providers][4]] IndexShardRecoveryException[failed to recovery from gateway]; nested: EngineCreationFailureException[failed to recover from translog]; nested: EngineException[failed to recover from translog]; nested: ElasticsearchException[unexpected exception reading from translog snapshot of /root/elasticsearch-2.3.4/data/elasticsearch/nodes/0/indices/fct_providers/4/translog/translog-31.tlog]; nested: EOFException[read past EOF. pos [1047332] length: [4] end: [1047332]];
at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:250)
at org.elasticsearch.index.shard.StoreRecoveryService.access$100(StoreRecoveryService.java:56)
at org.elasticsearch.index.shard.StoreRecoveryService$1.run(StoreRecoveryService.java:129)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: [fct_providers][[fct_providers][4]] EngineCreationFailureException[failed to recover from translog]; nested: EngineException[failed to recover from translog]; nested: ElasticsearchException[unexpected exception reading from translog snapshot of /root/elasticsearch-2.3.4/data/elasticsearch/nodes/0/indices/fct_providers/4/translog/translog-31.tlog]; nested: EOFException[read past EOF. pos [1047332] length: [4] end: [1047332]];
at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:177)
at org.elasticsearch.index.engine.InternalEngineFactory.newReadWriteEngine(InternalEngineFactory.java:25)
at org.elasticsearch.index.shard.IndexShard.newEngine(IndexShard.java:1509)
at org.elasticsearch.index.shard.IndexShard.createNewEngine(IndexShard.java:1493)
at org.elasticsearch.index.shard.IndexShard.internalPerformTranslogRecovery(IndexShard.java:966)
at org.elasticsearch.index.shard.IndexShard.performTranslogRecovery(IndexShard.java:938)
at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:241)
... 5 more
Caused by: [fct_providers][[fct_providers][4]] EngineException[failed to recover from translog]; nested: ElasticsearchException[unexpected exception reading from translog snapshot of /root/elasticsearch-2.3.4/data/elasticsearch/nodes/0/indices/fct_providers/4/translog/translog-31.tlog]; nested: EOFException[read past EOF. pos [1047332] length: [4] end: [1047332]];
at org.elasticsearch.index.engine.InternalEngine.recoverFromTranslog(InternalEngine.java:240)
at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:174)
... 11 more
Caused by: ElasticsearchException[unexpected exception reading from translog snapshot of /root/elasticsearch-2.3.4/data/elasticsearch/nodes/0/indices/fct_providers/4/translog/translog-31.tlog]; nested: EOFException[read past EOF. pos [1047332] length: [4] end: [1047332]];
at org.elasticsearch.index.translog.TranslogReader.readSize(TranslogReader.java:102)
at org.elasticsearch.index.translog.TranslogReader.access$000(TranslogReader.java:46)
at org.elasticsearch.index.translog.TranslogReader$ReaderSnapshot.readOperation(TranslogReader.java:294)
at org.elasticsearch.index.translog.TranslogReader$ReaderSnapshot.next(TranslogReader.java:287)
at org.elasticsearch.index.translog.MultiSnapshot.next(MultiSnapshot.java:70)
at org.elasticsearch.index.shard.TranslogRecoveryPerformer.recoveryFromSnapshot(TranslogRecoveryPerformer.java:105)
at org.elasticsearch.index.shard.IndexShard$1.recoveryFromSnapshot(IndexShard.java:1578)
at org.elasticsearch.index.engine.InternalEngine.recoverFromTranslog(InternalEngine.java:238)
... 12 more
Caused by: java.io.EOFException: read past EOF. pos [1047332] length: [4] end: [1047332]
at org.elasticsearch.common.io.Channels.readFromFileChannelWithEofException(Channels.java:102)
at org.elasticsearch.index.translog.ImmutableTranslogReader.readBytes(ImmutableTranslogReader.java:84)
at org.elasticsearch.index.translog.TranslogReader.readSize(TranslogReader.java:91)
... 19 more

Related

Hive query failed with org.apache.hadoop.hive.ql.exec.tez.TezTask java.lang.OutOfMemoryError: Java heap space

I have a Hive table in ORC format, where:
The table is partitioned by Year + Month + Day.
Total file size in HDFS: 10 TB.
Total records: 21 billion.
Total data nodes in HDP: 8.
I query the table from a Zeppelin notebook over jdbc(hive):
SELECT `timestamp`, url FROM events WHERE id='f9e43fc7b' ORDER BY `timestamp` DESC
However, the query fails with the error below:
java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask.
Vertex failed, vertexName=Map 1, vertexId=vertex_1591402457216_0009_2_01, diagnostics=[Vertex vertex_1591402457216_0009_2_01 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE,
Vertex Input: evt initializer failed, vertex=vertex_1591402457216_0009_2_01 [Map 1], java.lang.OutOfMemoryError: Java heap space
at java.util.regex.Matcher.<init>(Matcher.java:225)
at java.util.regex.Pattern.matcher(Pattern.java:1093)
at org.apache.hadoop.hive.ql.io.AcidUtils$BucketMetaData.parse(AcidUtils.java:318)
at org.apache.hadoop.hive.ql.io.AcidUtils$BucketMetaData.parse(AcidUtils.java:332)
at org.apache.hadoop.hive.ql.io.AcidUtils.parseBucketId(AcidUtils.java:367)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.determineSplitStrategy(OrcInputFormat.java:2331)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.determineSplitStrategies(OrcInputFormat.java:2306)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1811)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSplits(OrcInputFormat.java:1939)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:522)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:777)
at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:243)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:278)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:269)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:269)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:253)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
]Vertex killed, vertexName=Reducer 2, vertexId=vertex_1591402457216_0009_2_02, diagnostics=[Vertex received Kill in INITED state.,
Vertex vertex_1591402457216_0009_2_02 [Reducer 2] killed/failed due to:OTHER_VERTEX_FAILURE]DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:1
at org.apache.hive.jdbc.HiveStatement.waitForOperationToComplete(HiveStatement.java:401)
at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:266)
at org.apache.commons.dbcp2.DelegatingStatement.execute(DelegatingStatement.java:291)
at org.apache.commons.dbcp2.DelegatingStatement.execute(DelegatingStatement.java:291)
at org.apache.zeppelin.jdbc.JDBCInterpreter.executeSql(JDBCInterpreter.java:718)
at org.apache.zeppelin.jdbc.JDBCInterpreter.interpret(JDBCInterpreter.java:801)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:103)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:633)
at org.apache.zeppelin.scheduler.Job.run(Job.java:188)
at org.apache.zeppelin.scheduler.ParallelScheduler$JobRunner.run(ParallelScheduler.java:162)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I'm using HDP-3.0.1.0 with Ambari. Here's my current Hive config:
Tez Container Size: 3072 MB
HiveServer2 Heap Size: 4096 MB
Memory: 819.2 MB
Data per Reducer: 2042.9 MB
These are mostly the default values. Any suggestions on which settings to tweak for optimal performance?
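Judging from the stack trace, the OutOfMemoryError is thrown while OrcInputFormat generates splits inside the Tez Application Master (the AcidUtils/HiveSplitGenerator frames run in the AM), so raising the mapper container size alone may not help; it is the AM heap that is being exhausted. A hedged starting point, with illustrative values that would still need tuning for this cluster, is to set the following per session before running the query:
SET tez.am.resource.memory.mb=8192;
SET tez.am.launch.cmd-opts=-Xmx6553m;
SET hive.exec.orc.split.strategy=BI;
The BI split strategy makes split generation skip reading ORC file footers, which is often what exhausts the AM heap on heavily partitioned tables with very many files.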

Getting an intermittent issue while running Delete or Update in Hive

I am seeing very strange behavior on my cluster: I run a Delete/Update statement in the Hive CLI, and it sometimes works fine and sometimes fails. The table is ACID-enabled. Below is the command:
hive> delete from temptable where name='Jose';
The exception is as follows:
Status: Running (Executing on YARN cluster with App id application_15217_3223)
--------------------------------------------------------------------------------
VERTICES STATUS TOTAL COMPLETED RUNNING PENDING FAILED KILLED
--------------------------------------------------------------------------------
Map 1 FAILED -1 0 0 -1 0 0
Reducer 2 KILLED 2 0 0 2 0 0
--------------------------------------------------------------------------------
VERTICES: 00/02 [>>--------------------------] 0% ELAPSED TIME: 0.23 s
--------------------------------------------------------------------------------
Status: Failed
Vertex failed, vertexName=Map 1, vertexId=vertex_15217_3223_1_00, diagnostics=[Vertex vertex_15217_3223_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: offset initializer failed, vertex=vertex_1514368279217_3223_1_00 [Map 1], java.lang.RuntimeException: serious problem
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1273)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSplits(OrcInputFormat.java:1300)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:307)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:409)
at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:155)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:273)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:266)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:266)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:253)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.ExecutionException: java.lang.ArrayIndexOutOfBoundsException: 1
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1268)
... 15 more
Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSargColumnNames(OrcInputFormat.java:358)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.setSearchArgument(OrcInputFormat.java:392)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.callInternal(OrcInputFormat.java:1011)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.access$2000(OrcInputFormat.java:838)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator$1.run(OrcInputFormat.java:992)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator$1.run(OrcInputFormat.java:989)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:989)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:838)
... 4 more
]
Vertex killed, vertexName=Reducer 2, vertexId=vertex_15217_3223_1_01, diagnostics=[Vertex received Kill in INITED state., Vertex vertex_15217_3223_1_01 [Reducer 2] killed/failed due to:OTHER_VERTEX_FAILURE]
DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:1
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, vertexId=vertex_15217_3223_1_00, diagnostics=[Vertex vertex_15217_3223_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: offset initializer failed, vertex=vertex_15217_3223_1_00 [Map 1], java.lang.RuntimeException: serious problem
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1273)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSplits(OrcInputFormat.java:1300)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:307)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:409)
at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:155)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:273)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:266)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:266)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:253)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.ExecutionException: java.lang.ArrayIndexOutOfBoundsException: 1
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1268)
... 15 more
Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSargColumnNames(OrcInputFormat.java:358)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.setSearchArgument(OrcInputFormat.java:392)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.callInternal(OrcInputFormat.java:1011)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.access$2000(OrcInputFormat.java:838)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator$1.run(OrcInputFormat.java:992)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator$1.run(OrcInputFormat.java:989)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:989)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:838)
... 4 more
]Vertex killed, vertexName=Reducer 2, vertexId=vertex_15217_3223_1_01, diagnostics=[Vertex received Kill in INITED state., Vertex vertex_15217_3223_1_01 [Reducer 2] killed/failed due to:OTHER_VERTEX_FAILURE]DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:1
I have been unable to find any solution.
What is the cause? Do I need to enable something, verify some config, or am I missing something in the CLI?
You can try increasing the memory used to run the query; it may be exhausting its memory allocation before it can finish. Also try using Tez; I find it normally runs faster. An explanation of the difference between the two can be found here:
https://community.hortonworks.com/questions/83394/difference-between-mr-and-tez.html
Try setting these values (increase the memory based on need) before you run your query:
SET hive.tez.container.size=4096;
SET hive.tez.java.opts='-Xmx2000m';
SET hive.execution.engine=tez;
or these two if you're set on using MapReduce:
yarn.scheduler.minimum-allocation-mb
yarn.scheduler.maximum-allocation-mb
Set the execution engine to MR and try; it will work as expected.
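For this intermittent failure specifically, note that the root cause is an ArrayIndexOutOfBoundsException in OrcInputFormat.getSargColumnNames, i.e. in the search-argument (predicate pushdown) handling for the ACID table's delta files, not a memory problem, which would also explain why it only fails some of the time. A workaround commonly suggested for this exact trace (worth verifying against your Hive version before relying on it) is to disable ORC predicate pushdown for the session:
SET hive.optimize.index.filter=false;
This sidesteps the SARG column-mapping code the trace fails in; the proper fix is moving to a Hive release where the underlying ACID/ORC split-generation bug is patched.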

Why does the ES cluster stop working until I delete the old index?

The Elasticsearch documentation says that if we restart Node 1 and it still has copies of the old shards, it will try to reuse them, copying over from the primary shard only the files that have changed in the meantime.
So I did an experiment.
There are 5 nodes in my cluster; primary shard 1 is stored on node 1, and replica shard 1 is stored on node 2. When I restart node 1 and node 2, primary shard 1 and replica shard 1 both become UNASSIGNED, the health of the cluster turns red, and it never becomes green again. The cluster stops working until I delete the old index.
Here is part of the master node's log.
[ERROR][marvel.agent ] [es10] background thread had an uncaught exception
ElasticsearchException[failed to flush exporter bulks]
at org.elasticsearch.marvel.agent.exporter.ExportBulk$Compound.flush(ExportBulk.java:104)
at org.elasticsearch.marvel.agent.exporter.ExportBulk.close(ExportBulk.java:53)
at org.elasticsearch.marvel.agent.AgentService$ExportingWorker.run(AgentService.java:201)
at java.lang.Thread.run(Thread.java:745)
Suppressed: ElasticsearchException[failed to flush [default_local] exporter bulk]; nested: ElasticsearchException[failure in bulk execution, only the first 100 failures are printed:
[8]: index [.marvel-es-data], type [cluster_info], id [nm4dj3ucSRGsdautV_GDDw], message [UnavailableShardsException[[.marvel-es-data][1] primary shard is not active Timeout: [1m], request: [shard bulk {[.marvel-es-data][1]}]]]];
at org.elasticsearch.marvel.agent.exporter.ExportBulk$Compound.flush(ExportBulk.java:106)
... 3 more
Caused by: ElasticsearchException[failure in bulk execution, only the first 100 failures are printed:
[8]: index [.marvel-es-data], type [cluster_info], id [nm4dj3ucSRGsdautV_GDDw], message [UnavailableShardsException[[.marvel-es-data][1] primary shard is not active Timeout: [1m], request: [shard bulk {[.marvel-es-data][1]}]]]]
at org.elasticsearch.marvel.agent.exporter.local.LocalBulk.flush(LocalBulk.java:114)
at org.elasticsearch.marvel.agent.exporter.ExportBulk$Compound.flush(ExportBulk.java:101)
... 3 more
[2016-02-19 12:53:18,769][ERROR][marvel.agent ] [es10] background thread had an uncaught exception
ElasticsearchException[failed to flush exporter bulks]
at org.elasticsearch.marvel.agent.exporter.ExportBulk$Compound.flush(ExportBulk.java:104)
at org.elasticsearch.marvel.agent.exporter.ExportBulk.close(ExportBulk.java:53)
at org.elasticsearch.marvel.agent.AgentService$ExportingWorker.run(AgentService.java:201)
at java.lang.Thread.run(Thread.java:745)
Suppressed: ElasticsearchException[failed to flush [default_local] exporter bulk]; nested: ElasticsearchException[failure in bulk execution, only the first 100 failures are printed:
[8]: index [.marvel-es-data], type [cluster_info], id [nm4dj3ucSRGsdautV_GDDw], message [UnavailableShardsException[[.marvel-es-data][1] primary shard is not active Timeout: [1m], request: [shard bulk {[.marvel-es-data][1]}]]]];
at org.elasticsearch.marvel.agent.exporter.ExportBulk$Compound.flush(ExportBulk.java:106)
... 3 more
Caused by: ElasticsearchException[failure in bulk execution, only the first 100 failures are printed:
[8]: index [.marvel-es-data], type [cluster_info], id [nm4dj3ucSRGsdautV_GDDw], message [UnavailableShardsException[[.marvel-es-data][1] primary shard is not active Timeout: [1m], request: [shard bulk {[.marvel-es-data][1]}]]]]
at org.elasticsearch.marvel.agent.exporter.local.LocalBulk.flush(LocalBulk.java:114)
at org.elasticsearch.marvel.agent.exporter.ExportBulk$Compound.flush(ExportBulk.java:101)
... 3 more
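When both copies of a shard lived on the restarted nodes and their on-disk state cannot be recovered (for example a corrupted translog, as in the question at the top of this page), the primary stays UNASSIGNED and the cluster stays red; the marvel exporter errors above are a symptom of that unassigned primary, not the cause. On ES 2.x you can list the stuck shards and, strictly as a last resort that accepts losing the data in that shard, force-allocate an empty primary with a reroute command. A hedged sketch, where the index, shard, and node values are placeholders to replace with your own:
curl -XGET 'localhost:9200/_cat/shards?v'
curl -XPOST 'localhost:9200/_cluster/reroute' -d '{
  "commands": [
    { "allocate": { "index": ".marvel-es-data", "shard": 1, "node": "es10", "allow_primary": true } }
  ]
}'
allow_primary creates a new empty primary, so only use it when the shard's data is expendable or restorable from a snapshot.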

@Query elasticsearch spring data to get max id

I'm trying to get the max and min id via @Query in Spring Data Elasticsearch, so I use the following code, but I get this error:
No query registered for [aggs]
@Query(value="{\"aggs\":{\"max_id\":{\"max\":{\"field\":\"id\"}}}}")
Segment findMaxId();
Stack trace:
org.elasticsearch.action.search.SearchPhaseExecutionException: Failed to execute phase [dfs], all shards failed; shardFailures
{[QvVc1wxqRoWtkUhGSDULCg][segment][0]: SearchParseException[[segment][0]: from[0],size[10]: Parse Failure [Failed to parse source [{"from":0,"size":10,"query_binary": "eyJhZ2dzIjp7Im1heF9pZCI6eyJtYXgiOnsiZmllbGQiOiJpZCJ9fX19"}]]]; nested: QueryParsingException[[segment] No query registered for [aggs]]; }
{[QvVc1wxqRoWtkUhGSDULCg][segment][1]: SearchParseException[[segment][1]: from[0],size[10]: Parse Failure [Failed to parse source [{"from":0,"size":10,"query_binary": "eyJhZ2dzIjp7Im1heF9pZCI6eyJtYXgiOnsiZmllbGQiOiJpZCJ9fX19"}]]]; nested: QueryParsingException[[segment] No query registered for [aggs]]; }
{[QvVc1wxqRoWtkUhGSDULCg][segment][2]: SearchParseException[[segment][2]: from[0],size[10]: Parse Failure [Failed to parse source
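The trace also shows why this fails: the @Query body is embedded as the request's query element (the query_binary value above base64-decodes to exactly the aggs JSON), so Elasticsearch tries to parse aggs as a query type. In Spring Data Elasticsearch of that era, aggregations could not be declared through @Query at all; they went through NativeSearchQueryBuilder and ElasticsearchTemplate instead. A hedged sketch, assuming an injected ElasticsearchTemplate and taking the index and field names from the trace:
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.metrics.max.Max;
import org.springframework.data.elasticsearch.core.query.NativeSearchQuery;
import org.springframework.data.elasticsearch.core.query.NativeSearchQueryBuilder;

// @Query only supplies the "query" element of the request, so the
// aggregation must be attached to a native search query instead.
NativeSearchQuery searchQuery = new NativeSearchQueryBuilder()
        .withQuery(QueryBuilders.matchAllQuery())
        .withIndices("segment") // index name taken from the trace
        .addAggregation(AggregationBuilders.max("max_id").field("id"))
        .build();

// The ResultsExtractor callback exposes the raw SearchResponse, which is
// where the aggregation results live.
Max maxId = elasticsearchTemplate.query(searchQuery,
        response -> (Max) response.getAggregations().get("max_id"));
double value = maxId.getValue();
For the min, add a second aggregation built with AggregationBuilders.min("min_id") to the same query.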

How to export data from an ORC file format table into a normal text file format table?

I have a Hive table in ORC file format with bucketing, and it holds a huge amount of data. Now I want to move the data into a new table that is a normal text-format table.
I am running the query below:
insert into table db_name.tab_name PARTITION (process_name,year, month)
select intcentralstagingdataid, intpullrunid,intdynamicrunid,bintcrawlingid,
nvcrcrawlingtimestamp, nvcrcrawlingstatus, nvcrsupplier, nvcrnumberofnights,
nvcrcheckindate, nvcrcheckoutdate, nvcrtotalpricewithtax, nvcrpointofsale,
nvcrcity, nvcrcountry, nvcrhotelname, nvcrnumberofadults, nvcrdailyrate,
nvcrcurrency, nvcrtax, nvcrroomtype, nvcrsupplierhotel, nvcrstarrating,
nvcravailabilitystatus, nvcrboardtype, nvcrpromotion,
nvcrpromotiondescription, nvcrisb2b, nvcrpropertydescription, nvcrpostcode,
nvcrb2bhotelcode, nvcrdirectpayement, nvcrcancellationpolicy,
nvcrcustomerreviewsrating, nvcrcustomerreviewcomments, nvcrhotelfacilities,
nvcrflexprice, nvcrnonflexprice, nvcrflexcurrency, nvcrnonflexcurrency,
nvcrflexpricecapturedpage, nvcrnonflexpricecapturedpage,
nvcrcountryproxyused, dtminsertedon, nvcrroomcode, dtmrecord_date,
process_name, year, month
from db_name.tab_name
where process_name='<Condition>' AND year=<Condition> AND month=<Condition>;
But when I execute this query I get the following error:
Vertex failed, vertexName=Map 1, vertexId=vertex_1446452616907_0014_1_00, diagnostics=[Task failed, taskId=task_1446452616907_0014_1_00_000000, diagnostics=[TaskAttempt 0 failed, info=[Error: Failure while running task:java.lang.RuntimeException: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:186)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:138)
at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:324)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:176)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.call(TezTaskRunner.java:168)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.call(TezTaskRunner.java:163)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row
at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:91)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:68)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:294)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:163)
... 13 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row
at org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:52)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:83)
... 16 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error evaluating 'Hotels4U'
at org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.processOp(VectorSelectOperator.java:126)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
at org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:95)
at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:157)
at org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:45)
... 17 more
Caused by: java.lang.ArrayIndexOutOfBoundsException: 48
at org.apache.hadoop.hive.ql.exec.vector.expressions.ConstantVectorExpression.evaluateBytes(ConstantVectorExpression.java:124)
at org.apache.hadoop.hive.ql.exec.vector.expressions.ConstantVectorExpression.evaluate(ConstantVectorExpression.java:156)
at org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.processOp(VectorSelectOperator.java:124)
... 21 more
], TaskAttempt 1 failed, info=[Error: Failure while running task:java.lang.RuntimeException: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:186)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:138)
at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:324)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:176)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.call(TezTaskRunner.java:168)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.call(TezTaskRunner.java:163)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row
at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:91)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:68)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:294)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:163)
... 13 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row
at org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:52)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:83)
... 16 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error evaluating 'Hotels4U'
at org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.processOp(VectorSelectOperator.java:126)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
at org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:95)
at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:157)
at org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:45)
... 17 more
Caused by: java.lang.ArrayIndexOutOfBoundsException: 48
at org.apache.hadoop.hive.ql.exec.vector.expressions.ConstantVectorExpression.evaluateBytes(ConstantVectorExpression.java:124)
at org.apache.hadoop.hive.ql.exec.vector.expressions.ConstantVectorExpression.evaluate(ConstantVectorExpression.java:156)
at org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.processOp(VectorSelectOperator.java:124)
... 21 more
], TaskAttempt 2 failed, info=[Error: Failure while running task:java.lang.RuntimeException: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:186)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:138)
at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:324)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:176)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.call(TezTaskRunner.java:168)
Any idea how to resolve it?
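The root cause in every failed attempt is an ArrayIndexOutOfBoundsException inside ConstantVectorExpression while evaluating the constant 'Hotels4U' (the process_name value), which points at vectorized query execution rather than at the data itself. A workaround commonly suggested for this trace, worth verifying on your Hive version, is to disable vectorization for the session before running the INSERT:
SET hive.vectorized.execution.enabled=false;
SET hive.vectorized.execution.reduce.enabled=false;
If the insert then succeeds, the job ran on the non-vectorized path; the longer-term route is checking whether a newer Hive release fixes this vectorization bug.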
