Unauthorized request to start container. This token is expired - hadoop

I am getting "Unauthorized request to start container. This token is expired."
How do I resolve it? The problem is reported on different forums, but I could not find a solution to it.
Below is the execution log:
15/02/26 16:41:02 INFO impl.YarnClientImpl: Submitted application application_1424968835929_0001
15/02/26 16:41:02 INFO mapreduce.Job: The url to track the job: http://101-master15:8088/proxy/application_1424968835929_0001/
15/02/26 16:41:02 INFO mapreduce.Job: Running job: job_1424968835929_0001
15/02/26 16:41:04 INFO mapreduce.Job: Job job_1424968835929_0001 running in uber mode : false
15/02/26 16:41:04 INFO mapreduce.Job: map 0% reduce 0%
15/02/26 16:41:04 INFO mapreduce.Job: Job job_1424968835929_0001 failed with state FAILED due to: Application application_1424968835929_0001 failed 2 times due to Error launching appattempt_1424968835929_0001_000002. Got exception: org.apache.hadoop.yarn.exceptions.YarnException: Unauthorized request to start container.
This token is expired. current time is 1424969604829 found 1424969463686
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:122)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:249)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
. Failing the application.
15/02/26 16:41:04 INFO mapreduce.Job: Counters: 0
Time taken: 0 days, 0 hours, 0 minutes, 9 seconds.

This exception occurs when your nodes have different time settings: the NodeManager checks the token's expiry time against its own clock, so clock skew between nodes can make a fresh token look expired. In your log the two timestamps differ by 141143 ms (1424969604829 − 1424969463686), about 141 seconds. Make sure that all three of your nodes have the same time and timezone settings, then restart them.
This worked for me; hope it helps you as well!
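A minimal sketch of one way to bring the clocks in line, assuming the nodes can reach a public NTP server and you have root access (the hostnames below are examples):

# One-off sync; run on every node in the cluster:
sudo ntpdate pool.ntp.org

# Verify the nodes now agree (run from any machine with SSH access):
for host in master slave1 slave2; do ssh "$host" date; done

# For a lasting fix, keep an NTP daemon running on each node:
sudo systemctl enable --now ntpd    # or chronyd, depending on the distro

After the clocks agree, restart YARN so the ResourceManager and NodeManagers issue and check tokens against consistent time.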

Related

MapReduce job loses connection and then reconnects in Hadoop example "calculating pi 3 3"

Does anyone know why? The job always gets stuck while in progress (not at 0%); sometimes it disconnects and then reconnects, but basically the job can never finish!
Could the memory allocated to MapReduce be too small? Looking forward to your help!
[debura@master mapreduce]$ hadoop jar hadoop-mapreduce-examples-2.7.3.jar pi 3 3
Number of Maps = 3
Samples per Map = 3
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Starting Job
19/12/05 21:04:20 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.56.110:8032
19/12/05 21:04:21 INFO input.FileInputFormat: Total input paths to process : 3
19/12/05 21:04:22 INFO mapreduce.JobSubmitter: number of splits:3
19/12/05 21:04:22 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1575550949758_0001
19/12/05 21:04:23 INFO impl.YarnClientImpl: Submitted application application_1575550949758_0001
19/12/05 21:04:23 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1575550949758_0001/
19/12/05 21:04:23 INFO mapreduce.Job: Running job: job_1575550949758_0001
19/12/05 21:04:30 INFO mapreduce.Job: Job job_1575550949758_0001 running in uber mode : false
19/12/05 21:04:30 INFO mapreduce.Job: map 0% reduce 0%
19/12/05 21:04:34 INFO mapreduce.Job: map 33% reduce 0%
19/12/05 21:04:45 INFO mapreduce.Job: map 33% reduce 11%
19/12/05 21:07:31 INFO mapreduce.Job: Task Id : attempt_1575550949758_0001_m_000001_0, Status : FAILED
Container launch failed for container_1575550949758_0001_01_000004 : java.net.ConnectException: Call From slave2/192.168.56.112 to localhost:42149 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.GeneratedConstructorAccessor47.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
...
Then it reconnects again:
19/12/05 21:07:36 INFO mapreduce.Job: map 67% reduce 11%
19/12/05 21:07:37 INFO mapreduce.Job: map 67% reduce 22%
19/12/05 21:10:33 INFO mapreduce.Job: Task Id : attempt_1575550949758_0001_m_000000_1, Status : FAILED
Container launch failed for container_1575550949758_0001_01_000007 : java.net.ConnectException: Call From slave2/192.168.56.112 to localhost:42149 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
...
It appears that a DataNode is not running on slave2, or hdfs-site.xml is misconfigured as to where clients should be reading from. The telling part of the log is:
From slave2/192.168.56.112 to localhost:42149 failed
slave2 is being directed to a service on localhost rather than a real cluster hostname, which points at a configuration problem.
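A quick sketch of how to check, assuming passwordless SSH to slave2 and a standard config layout (adjust the install path to your own setup):

# On slave2, list which Hadoop daemons are actually running:
ssh slave2 jps

# From the master, confirm every DataNode has registered with the NameNode:
hdfs dfsadmin -report

# Look for configs on slave2 that point at localhost instead of a hostname:
ssh slave2 "grep -rn localhost /usr/local/hadoop/etc/hadoop/*.xml"

If jps on slave2 shows no DataNode or NodeManager, that daemon's log will say why it failed to start.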

Hadoop: NullPointerException when redirecting to job history server

I have a Hadoop cluster (HDP 2.1). Everything has been working for a long time, but suddenly jobs have started to return the following recurrent error:
16/10/13 16:21:11 INFO Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
16/10/13 16:21:12 INFO Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
16/10/13 16:21:12 INFO impl.TimelineClientImpl: Timeline service address: http://dev-fiwr-bignode-12.hi.inet:8188/ws/v1/timeline/
16/10/13 16:21:13 INFO client.RMProxy: Connecting to ResourceManager at dev-fiwr-bignode-12.hi.inet/10.95.76.79:8050
16/10/13 16:21:13 INFO input.FileInputFormat: Total input paths to process : 2
16/10/13 16:21:13 INFO mapreduce.JobSubmitter: number of splits:2
16/10/13 16:21:13 INFO Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
16/10/13 16:21:14 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1476366871137_0003
16/10/13 16:21:14 INFO impl.YarnClientImpl: Submitted application application_1476366871137_0003
16/10/13 16:21:14 INFO mapreduce.Job: The url to track the job: http://dev-fiwr-bignode-12.hi.inet:8088/proxy/application_1476366871137_0003/
16/10/13 16:21:14 INFO mapreduce.Job: Running job: job_1476366871137_0003
16/10/13 16:21:19 INFO mapreduce.Job: Job job_1476366871137_0003 running in uber mode : false
16/10/13 16:21:19 INFO mapreduce.Job: map 0% reduce 0%
16/10/13 16:21:23 INFO mapreduce.Job: map 50% reduce 0%
16/10/13 16:21:24 INFO mapreduce.Job: map 100% reduce 0%
16/10/13 16:21:28 INFO mapreduce.Job: map 100% reduce 100%
16/10/13 16:21:29 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
16/10/13 16:21:29 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
16/10/13 16:21:29 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
Exception in thread "main" java.io.IOException:
org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): java.lang.NullPointerException
org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.getTaskAttemptCompletionEvents(HistoryClientService.java:277)
org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.getTaskAttemptCompletionEvents(MRClientProtocolPBServiceImpl.java:173)
org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:283)
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
java.security.AccessController.doPrivileged(Native Method)
javax.security.auth.Subject.doAs(Subject.java:415)
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:334)
org.apache.hadoop.mapred.ClientServiceDelegate.getTaskCompletionEvents(ClientServiceDelegate.java:386)
org.apache.hadoop.mapred.YARNRunner.getTaskCompletionEvents(YARNRunner.java:539)
org.apache.hadoop.mapreduce.Job$5.run(Job.java:668)
org.apache.hadoop.mapreduce.Job$5.run(Job.java:665)
java.security.AccessController.doPrivileged(Native Method)
javax.security.auth.Subject.doAs(Subject.java:415)
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
org.apache.hadoop.mapreduce.Job.getTaskCompletionEvents(Job.java:665)
org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1366)
org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1306)
dijkstra.adjacencylist.AdjacencyListDriver.jobRun(AdjacencyListDriver.java:53)
dijkstra.adjacencylist.AdjacencyListDriver.run(AdjacencyListDriver.java:31)
org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
dijkstra.launch.LaunchClass.launchAdjMatrix(LaunchClass.java:226)
dijkstra.launch.LaunchClass.main(LaunchClass.java:199)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:606)
org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by:
org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException):
java.lang.NullPointerException
org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.getTaskAttemptCompletionEvents(HistoryClientService.java:277)
...
Googling a bit, I've seen these issues:
https://issues.apache.org/jira/browse/MAPREDUCE-5703
https://issues.apache.org/jira/browse/MAPREDUCE-5547
They seem to be related. Nevertheless, why was the cluster running properly until now? Nothing was changed in the configuration, the cluster is not in safe mode, and HDFS space usage is around 0.03%... Any clues? And in case this is related to the issues mentioned above, is there any workaround?
Many thanks; I'll stay tuned for your answers or requests for additional info.
Your issue is similar to MAPREDUCE-5703, judging by the stack trace, and as stated in that bug:
"The method GetTaskAttemptCompletionEventsResponse() fetched a Job by calling verifyAndGetJob(), but it never checked if job was null or not, which was the root cause of this issue."
There is a job lookup using a job ID, and the job is not found.
That bug describes a scenario in which the job history server (JHS) is queried about a finished job, but the JHS never received the info for that job.
There seem to be open issues around job termination and job history uploads that allow this exception to happen when the history upload fails. In that bug, the issue was triggered by restarting the node writing the history before the upload completed, or by that node having no good nodes to write the history to.
Unfortunately, there is nothing else here that might help identify what caused the history upload to fail in your case, but that appears to be the underlying source of the issue: your job history server has no record of the job that completed successfully.
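If you want to confirm that theory, here is a sketch of where to look. The directories below are common defaults for HDP; check mapreduce.jobhistory.done-dir and mapreduce.jobhistory.intermediate-done-dir in your mapred-site.xml for the actual values:

# Was the .jhist file for the failing job ever moved to the done directory?
hdfs dfs -ls -R /mr-history/done | grep job_1476366871137_0003

# Or is it stuck in the intermediate directory, waiting to be scanned?
hdfs dfs -ls -R /mr-history/tmp | grep job_1476366871137_0003

If the job ID shows up in neither place, the history upload failed, which matches the NullPointerException on lookup.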

Yarn MapReduce approximate-pi example fails exit code 1 when run as non-hadoop user

I am running a small private cluster of linux machines with Hadoop 2.6.2 and yarn. I launch yarn jobs from a linux edge node. The canned Yarn example to approximate the value of pi works perfectly when run by the hadoop (superuser, owner of the cluster) user, but fails when run from my personal account on the edge node. In both cases (hadoop, me) I run the job exactly like this:
clott@edge: /home/hadoop/hadoop-2.6.2/bin/yarn jar /home/hadoop/hadoop-2.6.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.2.jar pi 2 5
It fails; the full output is below. I think the file-not-found exception is totally bogus. I think something causes the launch of the container to fail, and so there's no output to be found. What causes container launches to fail, and how can this be debugged?
Because this identical command works perfectly when run by the hadoop user but fails when run by a different account on the same edge node, I suspect a permissions or other YARN configuration problem rather than a missing jar file. My personal account uses the same environment variables as the hadoop account, for what that's worth.
These questions are similar but I didn't find a solution:
https://issues.cloudera.org/browse/DISTRO-577
Running a map reduce job as a different user
Yarn MapReduce Job Issue - AM Container launch error in Hadoop 2.3.0
I have tried these remedies without any success:
In core-site.xml, set the value of hadoop.tmp.dir to /tmp/temp-${user.name}
Add my personal user account to every node in the cluster
I guess that many installations run with just a single user, but I'm trying to allow two people to work together on the cluster without trashing each other's intermediate results. Am I totally nuts?
Full output:
Number of Maps = 2
Samples per Map = 5
Wrote input for Map #0
Wrote input for Map #1
Starting Job
15/12/22 15:29:18 INFO client.RMProxy: Connecting to ResourceManager at ac1.mycompany.com/1.2.3.4:8032
15/12/22 15:29:18 INFO input.FileInputFormat: Total input paths to process : 2
15/12/22 15:29:19 INFO mapreduce.JobSubmitter: number of splits:2
15/12/22 15:29:19 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1450815437271_0002
15/12/22 15:29:19 INFO impl.YarnClientImpl: Submitted application application_1450815437271_0002
15/12/22 15:29:19 INFO mapreduce.Job: The url to track the job: http://ac1.mycompany.com:8088/proxy/application_1450815437271_0002/
15/12/22 15:29:19 INFO mapreduce.Job: Running job: job_1450815437271_0002
15/12/22 15:29:31 INFO mapreduce.Job: Job job_1450815437271_0002 running in uber mode : false
15/12/22 15:29:31 INFO mapreduce.Job: map 0% reduce 0%
15/12/22 15:29:31 INFO mapreduce.Job: Job job_1450815437271_0002 failed with state FAILED due to: Application application_1450815437271_0002 failed 2 times due to AM Container for appattempt_1450815437271_0002_000002 exited with exitCode: 1
For more detailed output, check application tracking page:http://ac1.mycompany.com:8088/proxy/application_1450815437271_0002/Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1450815437271_0002_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
at org.apache.hadoop.util.Shell.run(Shell.java:455)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
15/12/22 15:29:31 INFO mapreduce.Job: Counters: 0
Job Finished in 13.489 seconds
java.io.FileNotFoundException: File does not exist: hdfs://ac1.mycompany.com/user/clott/QuasiMonteCarlo_1450816156703_163431099/out/reduce-out
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1122)
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1817)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1841)
at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:314)
at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Yes, Manjunath Ballur, you were right: it was a permissions problem! I finally learned how to preserve the YARN application logs, which clearly revealed the problem. Here are the steps:
Edit yarn-site.xml and add a property to delay yarn log deletion:
<property>
<name>yarn.nodemanager.delete.debug-delay-sec</name>
<value>600</value>
</property>
Push yarn-site.xml to all nodes (ARGH I forgot this for a long time) and restart cluster.
Run yarn example to estimate pi as shown above, it fails. Look at http://namenode:8088/cluster/apps/FAILED to see the failed application, click on the link for the most recent failure, look at the bottom to see which nodes in the cluster were used.
Open a window on one of the nodes in the cluster where the app failed. Find the job directory, which in my case was
~hadoop/hadoop-2.6.2/logs/userlogs/application_1450815437271_0004/container_1450815437271_0004_01_000001/
Et voila, I saw files stdout (only log4j bitching), stderr (nearly empty) and syslog (winner winner chicken dinner). In the syslog file I found this gem:
2015-12-23 08:31:42,376 INFO [main] org.apache.hadoop.service.AbstractService: Service JobHistoryEventHandler failed in state INITED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: org.apache.hadoop.security.AccessControlException: Permission denied: user=clott, access=EXECUTE, inode="/tmp/hadoop-yarn/staging/history":hadoop:supergroup:drwxrwx---
So the problem was permissions on hdfs:///tmp/hadoop-yarn/staging/history. A simple chmod 777 put me right, I'm not fighting the group perms anymore. Now a non-hadoop non-superuser can run a yarn job.
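For reference, a sketch of the commands involved; the 777 is the blunt fix described above, while the sticky-bit variant is an alternative I'd suggest, not something from the original answer:

# Blunt fix: open the staging history directory to everyone (run as hadoop):
hdfs dfs -chmod -R 777 /tmp/hadoop-yarn/staging/history

# Narrower alternative: world-writable with the sticky bit, like /tmp,
# so users cannot remove each other's files:
hdfs dfs -chmod -R 1777 /tmp/hadoop-yarn/staging/history

And once log aggregation is enabled (yarn.log-aggregation-enable), you can skip the per-node digging entirely with yarn logs -applicationId application_1450815437271_0004.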

Hadoop-2.5.1 + Nutch-2.2.1: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected

Command: ./crawl /urls /mydir XXXXX 2
When I run this command with Hadoop 2.5.1 and Nutch 2.2.1, I get the following error.
14/10/07 19:58:10 INFO mapreduce.Job: Running job: job_1411692996443_0016
14/10/07 19:58:17 INFO mapreduce.Job: Job job_1411692996443_0016 running in uber mode : false
14/10/07 19:58:17 INFO mapreduce.Job: map 0% reduce 0%
14/10/07 19:58:21 INFO mapreduce.Job: Task Id : attempt_1411692996443_0016_m_000000_0, Status : FAILED
Error: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
14/10/07 19:58:26 INFO mapreduce.Job: Task Id : attempt_1411692996443_0016_m_000000_1, Status : FAILED
Error: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
14/10/07 19:58:31 INFO mapreduce.Job: Task Id : attempt_1411692996443_0016_m_000000_2, Status : FAILED
Error: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
14/10/07 19:58:36 INFO mapreduce.Job: map 100% reduce 0%
14/10/07 19:58:36 INFO mapreduce.Job: Job job_1411692996443_0016 failed with state FAILED due to: Task failed task_1411692996443_0016_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
14/10/07 19:58:36 INFO mapreduce.Job: Counters: 12
Job Counters
Failed map tasks=4
Launched map tasks=4
Other local map tasks=3
Data-local map tasks=1
Total time spent by all maps in occupied slots (ms)=11785
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=11785
Total vcore-seconds taken by all map tasks=11785
Total megabyte-seconds taken by all map tasks=12067840
Map-Reduce Framework
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
14/10/07 19:58:36 ERROR crawl.InjectorJob: InjectorJob: java.lang.RuntimeException: job failed: name=[/mydir]inject /urls, jobid=job_1411692996443_0016
at org.apache.nutch.util.NutchJob.waitForCompletion(NutchJob.java:55)
at org.apache.nutch.crawl.InjectorJob.run(InjectorJob.java:233)
at org.apache.nutch.crawl.InjectorJob.inject(InjectorJob.java:251)
at org.apache.nutch.crawl.InjectorJob.run(InjectorJob.java:273)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.nutch.crawl.InjectorJob.main(InjectorJob.java:282)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Probably you are using Gora (or something else) compiled against Hadoop 1 (from the Maven repo?). You can download Gora (0.5?) and build it against Hadoop 2.
Perhaps this is just the first trouble in a series of problems.
Please let us know about your next steps.
I had a similar error on Nutch 2.x with Hadoop 2.4.0.
Recompile Nutch with Hadoop 2.5.1 dependencies (via ivy) and exclude all Hadoop 1.x dependencies; you can find them in lib, probably hadoop-core.
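A rough sketch of that rebuild, assuming a Nutch 2.x source checkout with the standard ant/ivy build (the exact artifact names in ivy/ivy.xml vary by version):

# Point the ivy dependencies at Hadoop 2.5.1 and remove the Hadoop 1.x
# artifacts (hadoop-core) in ivy/ivy.xml, then rebuild the runtime:
ant clean runtime

# Verify that no Hadoop 1.x jars remain in the built runtime:
ls runtime/local/lib/ | grep hadoop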

Hadoop mapreduce container exited with a non-zero exit code 1

I'm trying to run a Hadoop program to extract keywords from some abstracts on Ubuntu. When I run my program using Hadoop, I get the following error.
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
INFO input.FileInputFormat: Total input paths to process : 1
INFO mapreduce.JobSubmitter: number of splits:1
INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1404812840999_0001
INFO impl.YarnClientImpl: Submitted application application_1404812840999_0001
INFO mapreduce.Job: The url to track the job: http://shiva-VirtualBox:8088/proxy/application_1404812840999_0001/
INFO mapreduce.Job: Running job: job_1404812840999_0001
INFO mapreduce.Job: Job job_1404812840999_0001 running in uber mode : false
INFO mapreduce.Job: map 0% reduce 0%
INFO mapreduce.Job: Job job_1404812840999_0001 failed with state FAILED due to: Application application_1404812840999_0001 failed 2 times due to AM Container for appattempt_1404812840999_0001_000002 exited with exitCode: 1 due to: Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
org.apache.hadoop.util.Shell$ExitCodeException:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
at org.apache.hadoop.util.Shell.run(Shell.java:418)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Container exited with a non-zero exit code 1
.Failing this attempt.. Failing the application.
14/07/08 14:21:44 INFO mapreduce.Job: Counters: 0
What's the cause of this error?
Note that I converted my MapReduce project to a Maven project in order to use the Lucene library in my code.
Is your ResourceManager really at /0.0.0.0:8032? It also seems you are not using ToolRunner, so try rewriting your MapReduce driver as described in Hadoop: Implementing the Tool interface for MapReduce driver.
Hope it helps.
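A quick way to check what the client is actually configured with (a sketch; $HADOOP_CONF_DIR is wherever your config files live):

# Is a ResourceManager address set at all? If yarn.resourcemanager.hostname
# (or .address) is missing, YARN falls back to 0.0.0.0:8032, which is
# exactly what the log above shows:
grep -B1 -A2 'yarn.resourcemanager' $HADOOP_CONF_DIR/yarn-site.xml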
The number of threads increased, and JVM memory and CPU are fully utilised. Increase the JVM heap size and the memory limits of the mapper and reducer tasks:
conf.set("mapreduce.map.memory.mb", "4096");
conf.set("mapreduce.map.java.opts", "-Xmx3500m");
