I have different tables in Hive. I want to save each table as a tab-separated file. How can I do it?
If I run this command, I get an error:
hive -e 'select * from myTable' > /home/mydata_tsv/myTable.tsv;
Error message:
NoViableAltException(26#[])
at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1084)
at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:202)
at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:437)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:320)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1219)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1260)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1156)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1146)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:217)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:169)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:380)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:740)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:685)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
FAILED: ParseException line 1:0 cannot recognize input near 'hive' '-' 'e'
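A minimal sketch, assuming the ParseException means the command was typed at the hive> prompt rather than at the OS shell, and that the default tab-separated CLI output is acceptable (the alternative form requires a Hive version that supports delimited local-directory output):

# Run from bash, not from the hive> prompt; CLI output is tab-separated by default.
hive -e 'SELECT * FROM myTable' > /home/mydata_tsv/myTable.tsv

# Alternative sketch: have Hive write the tab-separated files itself.
hive -e "INSERT OVERWRITE LOCAL DIRECTORY '/home/mydata_tsv/myTable'
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
SELECT * FROM myTable"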
Related
For example, I have two Hive jobs, where the output of one job is used as an argument/variable in the second job. I can successfully run the following command in the terminal to get my result on the master node of the EMR cluster.
[hadoop@ip-10-6-131-223 ~]$ hive -f s3://MyProjectXYZ/bin/GetNewJobDetails_SelectAndOverwrite.hql --hivevar LatestLastUpdated=$(hive -f s3://MyProjectXYZ/bin/GetNewJobDetails_LatestLastUpdated.hql)
However, it seems I cannot add a Hive step to run GetNewJobDetails_SelectAndOverwrite.hql with the Arguments textbox set to --hivevar LatestLastUpdated=$(hive -f s3://MyProjectXYZ/bin/GetNewJobDetails_LatestLastUpdated.hql).
The error is:
Details : FAILED: ParseException line 7:61 cannot recognize input near '$' '(' 'hive' in expression specification
JAR location : command-runner.jar
Main class : None
Arguments : hive-script --run-hive-script --args -f s3://MyProjectXYZ/bin/GetNewJobDetails_SelectAndOverwrite.hql --hivevar LatestLastUpdated=$(hive -f s3://MyProjectXYZ/bin/GetNewJobDetails_LatestLastUpdated.hql)
Action on failure: Cancel and wait
I also tried it with command-runner.jar to run the first hive command. It still does not work:
NoViableAltException(15#[412:1: atomExpression : ( constant | ( intervalExpression )=> intervalExpression | castExpression | extractExpression | floorExpression | caseExpression | whenExpression | ( subQueryExpression )=> ( subQueryExpression ) -> ^( TOK_SUBQUERY_EXPR TOK_SUBQUERY_OP subQueryExpression ) | ( function )=> function | tableOrColumn | expressionsInParenthesis[true] );])
at org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser$DFA36.specialStateTransition(HiveParser_IdentifiersParser.java:31808)
at org.antlr.runtime.DFA.predict(DFA.java:80)
at org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.atomExpression(HiveParser_IdentifiersParser.java:6746)
at org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceFieldExpression(HiveParser_IdentifiersParser.java:6988)
at org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceUnaryPrefixExpression(HiveParser_IdentifiersParser.java:7324)
at org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceUnarySuffixExpression(HiveParser_IdentifiersParser.java:7380)
at org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceBitwiseXorExpression(HiveParser_IdentifiersParser.java:7542)
at org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceStarExpression(HiveParser_IdentifiersParser.java:7685)
at org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedencePlusExpression(HiveParser_IdentifiersParser.java:7828)
at org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceConcatenateExpression(HiveParser_IdentifiersParser.java:7967)
at org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceAmpersandExpression(HiveParser_IdentifiersParser.java:8177)
at org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceBitwiseOrExpression(HiveParser_IdentifiersParser.java:8314)
at org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceSimilarExpressionPart(HiveParser_IdentifiersParser.java:8943)
at org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceSimilarExpressionMain(HiveParser_IdentifiersParser.java:8816)
at org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceSimilarExpression(HiveParser_IdentifiersParser.java:8697)
at org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceEqualExpression(HiveParser_IdentifiersParser.java:9537)
at org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceNotExpression(HiveParser_IdentifiersParser.java:9703)
at org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceAndExpression(HiveParser_IdentifiersParser.java:9812)
at org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceOrExpression(HiveParser_IdentifiersParser.java:9953)
at org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.expression(HiveParser_IdentifiersParser.java:6686)
at org.apache.hadoop.hive.ql.parse.HiveParser.expression(HiveParser.java:42062)
at org.apache.hadoop.hive.ql.parse.HiveParser_FromClauseParser.searchCondition(HiveParser_FromClauseParser.java:6446)
at org.apache.hadoop.hive.ql.parse.HiveParser_FromClauseParser.whereClause(HiveParser_FromClauseParser.java:6364)
at org.apache.hadoop.hive.ql.parse.HiveParser.whereClause(HiveParser.java:41844)
at org.apache.hadoop.hive.ql.parse.HiveParser.atomSelectStatement(HiveParser.java:36755)
at org.apache.hadoop.hive.ql.parse.HiveParser.selectStatement(HiveParser.java:36987)
at org.apache.hadoop.hive.ql.parse.HiveParser.regularBody(HiveParser.java:36504)
at org.apache.hadoop.hive.ql.parse.HiveParser.queryStatementExpressionBody(HiveParser.java:35822)
at org.apache.hadoop.hive.ql.parse.HiveParser.queryStatementExpression(HiveParser.java:35710)
at org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:2284)
at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1333)
at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:208)
at org.apache.hadoop.hive.ql.parse.ParseUtils.parse(ParseUtils.java:77)
at org.apache.hadoop.hive.ql.parse.ParseUtils.parse(ParseUtils.java:70)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:468)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1317)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1457)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:336)
at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:474)
at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:490)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:793)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:234)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
FAILED: ParseException line 7:61 cannot recognize input near '$' '(' 'hive' in expression specification
You should execute the two hive commands as two different steps on the EMR cluster. Also, the arguments should be passed as a list instead of a string: split your hive command on spaces (' ') to get a list, and pass that list as the arguments of the EMR step.
Reference : https://docs.aws.amazon.com/cli/latest/reference/emr/add-steps.html
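As a rough, untested sketch of passing the arguments as a list with the AWS CLI (the cluster id and the LatestLastUpdated value are placeholders; the $(...) substitution is not evaluated by EMR, so it must already be resolved, e.g. by the first step, before this step is submitted):

# Hypothetical cluster id; every token inside Args=[...] is a separate list element.
aws emr add-steps --cluster-id j-XXXXXXXXXXXX \
  --steps 'Type=CUSTOM_JAR,Jar=command-runner.jar,Name=SelectAndOverwrite,ActionOnFailure=CANCEL_AND_WAIT,Args=[hive-script,--run-hive-script,--args,-f,s3://MyProjectXYZ/bin/GetNewJobDetails_SelectAndOverwrite.hql,--hivevar,LatestLastUpdated=<resolved_value>]'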
I have a bucket on s3, named "mybucket".
I used to be able to load files using pyspark, for example:
>>> rdd = sc.wholeTextFiles('s3n://mybucket/mydirectory/*.txt')
>>> rdd.count()
108
It worked.
Now when I do exactly the same thing, instead of getting the number of files, I get the following java.lang.NullPointerException error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/root/spark/python/pyspark/rdd.py", line 1008, in count
return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
File "/root/spark/python/pyspark/rdd.py", line 999, in sum
return self.mapPartitions(lambda x: [sum(x)]).fold(0, operator.add)
File "/root/spark/python/pyspark/rdd.py", line 873, in fold
vals = self.mapPartitions(func).collect()
File "/root/spark/python/pyspark/rdd.py", line 776, in collect
port = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
File "/root/spark/python/lib/py4j-0.10.1-src.zip/py4j/java_gateway.py", line 933, in __call__
File "/root/spark/python/pyspark/sql/utils.py", line 63, in deco
return f(*a, **kw)
File "/root/spark/python/lib/py4j-0.10.1-src.zip/py4j/protocol.py", line 312, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: java.lang.NullPointerException
at org.apache.hadoop.fs.s3native.NativeS3FileSystem.listStatus(NativeS3FileSystem.java:479)
at org.apache.hadoop.fs.Globber.listStatus(Globber.java:69)
at org.apache.hadoop.fs.Globber.glob(Globber.java:217)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1642)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:291)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:263)
at org.apache.spark.input.WholeTextFileInputFormat.setMinPartitions(WholeTextFileInputFormat.scala:55)
at org.apache.spark.rdd.WholeTextFileRDD.getPartitions(WholeTextFileRDD.scala:49)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
at org.apache.spark.api.python.PythonRDD.getPartitions(PythonRDD.scala:53)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1911)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:893)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:358)
at org.apache.spark.rdd.RDD.collect(RDD.scala:892)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:453)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:280)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:128)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:211)
at java.lang.Thread.run(Thread.java:748)
What could have changed when stopping and starting the cluster that caused this error?
I use this little script to start, stop, and log in to EC2:
#!/bin/bash
if [[ "$1" =~ ^(login|start|stop)$ ]]; then
    /usr/local/spark/spark-ec2/spark-ec2 -k aws1 --identity-file=/home/myusername/mydirectory/aws1.pem --region=us-west-2 --zone=us-west-2a --copy-aws-credentials "$1" my_cluster
else
    echo "\"$1\" is not a valid command"
fi
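One thing worth ruling out (an assumption, not a confirmed diagnosis): after stopping and starting, the new instances may no longer carry the AWS credentials that --copy-aws-credentials propagated before, and a missing fs.s3n credential setting is one possible way to end up with an NPE inside NativeS3FileSystem.listStatus. The credentials can be supplied explicitly when launching the shell, for example:

# Hypothetical placeholder keys; spark.hadoop.* entries are copied into the Hadoop configuration.
pyspark \
  --conf spark.hadoop.fs.s3n.awsAccessKeyId=<your-access-key> \
  --conf spark.hadoop.fs.s3n.awsSecretAccessKey=<your-secret-key>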
While trying to import a table from an RDBMS, Sqoop errors out with the following log:
>Error: java.io.IOException: SQLException in nextKeyValue
> at org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:277)
> at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
> at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
> at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
> at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
>Caused by: java.sql.SQLException: Numeric Overflow
> at oracle.jdbc.driver.NumberCommonAccessor.throwOverflow(NumberCommonAccessor.java:4170)
> at oracle.jdbc.driver.NumberCommonAccessor.getBigDecimal(NumberCommonAccessor.java:2376)
> at oracle.jdbc.driver.GeneratedStatement.getBigDecimal(GeneratedStatement.java:96)
> at oracle.jdbc.driver.GeneratedScrollableResultSet.getBigDecimal(GeneratedScrollableResultSet.java:126)
> at org.apache.sqoop.lib.JdbcWritableBridge.readBigDecimal(JdbcWritableBridge.java:126)
> at com.cloudera.sqoop.lib.JdbcWritableBridge.readBigDecimal(JdbcWritableBridge.java:97)
> at QueryResult.readFields(QueryResult.java:3639)
> at org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:244)
> ... 12 more
>Container killed by the ApplicationMaster.
>Container killed on request. Exit code is 143
>Container exited with a non-zero exit code 143
The error itself is understandable: some column holds a value that does not fit the data type Sqoop chose for it. The issue is that there are 180-odd columns and millions of records, so how do I identify which column is causing the trouble?
How do I debug and fix this?
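One way to narrow it down, sketched here under the assumption that the source is Oracle (as the oracle.jdbc frames in the trace suggest) and with placeholder connection details and table/column names:

# List NUMBER columns and their declared precision/scale; columns with no
# declared precision are frequent overflow suspects.
sqoop eval \
  --connect jdbc:oracle:thin:@//dbhost:1521/ORCL --username myuser -P \
  --query "SELECT column_name, data_precision, data_scale FROM all_tab_columns WHERE table_name = 'MYTABLE' AND data_type = 'NUMBER'"

# Once a suspect column is found, one option is to force it to a safer Java type
# on import (keep the rest of your existing import arguments in place of '...'):
sqoop import ... --map-column-java SUSPECT_COLUMN=String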
I am trying to run the RandomWalkWithRestart example https://github.com/apache/giraph/blob/release-1.0/giraph-examples/src/main/java/org/apache/giraph/examples/RandomWalkWithRestartVertex.java
My input data is
12 34 56
34 78
56 34 78
78 34
and I am running
hadoop jar giraph-examples-1.1.0-for-hadoop-2.2.0-jar-with-dependencies.jar GiraphRunner \
  -Dgiraph.zkList=<host>:port \
  -libjars giraph-examples-1.1.0-for-hadoop-2.2.0-jar-with-dependencies.jar \
  org.apache.giraph.examples.RandomWalkWithRestartComputation \
  -mc org.apache.giraph.examples.RandomWalkVertexMasterCompute \
  -wc org.apache.giraph.examples.RandomWalkWorkerContext \
  -vof org.apache.giraph.examples.VertexWithDoubleValueDoubleEdgeTextOutputFormat \
  -vif org.apache.giraph.examples.LongDoubleDoubleTextInputFormat \
  -vip giraph_algorithms/personalized_pr/input/graph.txt \
  -op giraph_algorithms/personalized_pr/out1 -w 1
But I am getting this error.. :-/
Error: java.lang.IllegalStateException: run: Caught an unrecoverable exception For input string: "PK�uE META-INF/��PKPK�uEMETA-INF/MANIFEST.MF�M��LK-.�"
at org.apache.giraph.graph.GraphMapper.run(GraphMapper.java:101)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Caused by: java.lang.NumberFormatException: For input string: "PK�uE META-INF/��PKPK�uEMETA-INF/MANIFEST.MF�M��LK-.�"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:441)
at java.lang.Long.parseLong(Long.java:483)
at org.apache.giraph.examples.RandomWalkWorkerContext.initializeSources(RandomWalkWorkerContext.java:131)
at org.apache.giraph.examples.RandomWalkWorkerContext.setStaticVars(RandomWalkWorkerContext.java:160)
at org.apache.giraph.examples.RandomWalkWorkerContext.preApplication(RandomWalkWorkerContext.java:146)
at org.apache.giraph.graph.GraphTaskManager.workerContextPreApp(GraphTaskManager.java:815)
at org.apache.giraph.graph.GraphTaskManager.prepareGraphStateAndWorkerContext(GraphTaskManager.java:451)
at org.apache.giraph.graph.GraphTaskManager.execute(GraphTaskManager.java:266)
at org.apache.giraph.graph.GraphMapper.run(GraphMapper.java:91)
... 7 more
Why is it reading the manifest file, when I specifically told it to read a file and not even a directory?
Because you passed the libjars argument as the vertex class file.
Like the other arguments, you need to say: -D libjars=your_jar.jar.
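A sketch of the full invocation this suggestion implies (untested; host and port remain placeholders, and the jar name is taken from the question):

# Per the answer above: pass the jar via -D libjars=... so it is not
# consumed as the computation class argument.
hadoop jar giraph-examples-1.1.0-for-hadoop-2.2.0-jar-with-dependencies.jar GiraphRunner \
  -Dgiraph.zkList=<host>:<port> \
  -D libjars=giraph-examples-1.1.0-for-hadoop-2.2.0-jar-with-dependencies.jar \
  org.apache.giraph.examples.RandomWalkWithRestartComputation \
  -mc org.apache.giraph.examples.RandomWalkVertexMasterCompute \
  -wc org.apache.giraph.examples.RandomWalkWorkerContext \
  -vof org.apache.giraph.examples.VertexWithDoubleValueDoubleEdgeTextOutputFormat \
  -vif org.apache.giraph.examples.LongDoubleDoubleTextInputFormat \
  -vip giraph_algorithms/personalized_pr/input/graph.txt \
  -op giraph_algorithms/personalized_pr/out1 -w 1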
I've installed WSO2 BAM 2.3.0 and started Analytics.
I completed all the steps from "Introduction to BAM Analytics Framework".
My system is Windows 7 64-bit.
I've installed Cygwin with the base, net, and security packages, and appended ;C:\cygwin\bin to my PATH variable.
After I execute it in the console, errors show up. Note that the file does exist:
TID: [0] [BAM] [2013-06-18 10:31:35,383] ERROR {org.apache.hadoop.hive.ql.exec.ExecDriver} - Job Submission failed with exception 'org.apache.hadoop.util.Shell$ExitCodeException(chmod: getting attributes of `C:\\wso2\\wso2bam\\tmp\\hadoop\\staging\\gbelyaev-911074626\\.staging': No such file or directory
)'
org.apache.hadoop.util.Shell$ExitCodeException: chmod: getting attributes of `C:\\wso2\\wso2bam\\tmp\\hadoop\\staging\\gbelyaev-911074626\\.staging': No such file or directory
at org.apache.hadoop.util.Shell.runCommand(Shell.java:255)
at org.apache.hadoop.util.Shell.run(Shell.java:182)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)
at org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:553)
at org.apache.hadoop.fs.RawLocalFileSystem.execSetPermission(RawLocalFileSystem.java:545)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:531)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:324)
at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:183)
at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:116)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:798)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:792)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1123)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:792)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:766)
at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:458)
at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:728)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
{org.apache.hadoop.hive.ql.exec.ExecDriver}
TID: [0] [BAM] [2013-06-18 10:31:37,903] ERROR {org.apache.hadoop.hive.ql.exec.Task} - Execution failed with exit status: 2 {org.apache.hadoop.hive.ql.exec.Task}
TID: [0] [BAM] [2013-06-18 10:31:37,907] ERROR {org.apache.hadoop.hive.ql.exec.Task} - Obtaining error information {org.apache.hadoop.hive.ql.exec.Task}
TID: [0] [BAM] [2013-06-18 10:31:37,915] ERROR {org.apache.hadoop.hive.ql.exec.Task} -
Task failed!
After rebooting Windows, it worked fine.