Hadoop not running tasks

I have a cluster of one master and one slave that are connected and "probably" communicating. I have followed several guides to install and set up the cluster; almost all of them are similar, the only differences being the memory and cores assigned.
Both my master and slave have 8 vcores and 32 GB of RAM each, with around 600 GB of SSD.
However, when I try to run a Hadoop job I get the following output:
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar wordcount /input /output
20/11/03 15:51:34 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/11/03 15:51:35 INFO client.RMProxy: Connecting to ResourceManager at master/master:8032
20/11/03 15:51:36 INFO input.FileInputFormat: Total input paths to process : 1
20/11/03 15:51:36 INFO mapreduce.JobSubmitter: number of splits:1
20/11/03 15:51:36 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1604418534431_0001
20/11/03 15:51:36 INFO impl.YarnClientImpl: Submitted application application_1604418534431_0001
20/11/03 15:51:36 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1604418534431_0001/
20/11/03 15:51:36 INFO mapreduce.Job: Running job: job_1604418534431_0001
20/11/03 15:51:43 INFO mapreduce.Job: Job job_1604418534431_0001 running in uber mode : false
20/11/03 15:51:43 INFO mapreduce.Job: map 0% reduce 0%
20/11/03 15:51:46 INFO mapreduce.Job: Task Id : attempt_1604418534431_0001_m_000000_0, Status : FAILED
Exception from container-launch.
Container id: container_1604418534431_0001_01_000002
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
at org.apache.hadoop.util.Shell.run(Shell.java:479)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Container exited with a non-zero exit code 1
20/11/03 15:51:49 INFO mapreduce.Job: Task Id : attempt_1604418534431_0001_m_000000_1, Status : FAILED
Exception from container-launch.
Container id: container_1604418534431_0001_01_000003
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
at org.apache.hadoop.util.Shell.run(Shell.java:479)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Container exited with a non-zero exit code 1
20/11/03 15:51:52 INFO mapreduce.Job: Task Id : attempt_1604418534431_0001_m_000000_2, Status : FAILED
Exception from container-launch.
Container id: container_1604418534431_0001_01_000004
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
at org.apache.hadoop.util.Shell.run(Shell.java:479)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Container exited with a non-zero exit code 1
20/11/03 15:51:57 INFO mapreduce.Job: map 100% reduce 100%
20/11/03 15:51:58 INFO mapreduce.Job: Job job_1604418534431_0001 failed with state FAILED due to: Task failed task_1604418534431_0001_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
20/11/03 15:51:58 INFO mapreduce.Job: Counters: 16
Job Counters
Failed map tasks=4
Killed reduce tasks=1
Launched map tasks=4
Other local map tasks=3
Rack-local map tasks=1
Total time spent by all maps in occupied slots (ms)=3946
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=3946
Total time spent by all reduce tasks (ms)=0
Total vcore-milliseconds taken by all map tasks=3946
Total vcore-milliseconds taken by all reduce tasks=0
Total megabyte-milliseconds taken by all map tasks=4845688
Total megabyte-milliseconds taken by all reduce tasks=0
Map-Reduce Framework
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
What I am trying to do is the following:
echo "hello world hello Hello" > ~/Downloads/test.txt
hadoop fs -mkdir /input
hadoop fs -put ~/Downloads/test.txt /input
hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar wordcount /input /output

I fixed the problem by changing the hostname entries in the hosts file: I had to add an extra entry (apart from master) that resolves the machine's own hostname to its IP.
Example:
<master.ip> master <hostname>
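For example, a minimal /etc/hosts on each node could look like this (a sketch; the IPs and the node-master/node-slave hostnames are made-up placeholders for your own values):
192.168.1.10 master node-master
192.168.1.11 slave node-slave
The important part is that the name each machine reports as its own hostname resolves to its real IP rather than to 127.0.0.1 or 127.0.1.1, so that YARN containers can reach the ResourceManager and each other.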
Thanks, everyone, for your help!

Related

MapReduce job loses its connection and then reconnects in the Hadoop example "calculating pi 3 3"

Does anyone know why? The job always gets stuck while in progress (not at 0%); sometimes it disconnects and then reconnects, but basically the job can never finish!
Could the memory allocated to MapReduce be too little? Looking forward to your help!
[debura@master mapreduce]$ hadoop jar hadoop-mapreduce-examples-2.7.3.jar pi 3 3
Number of Maps = 3
Samples per Map = 3
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Starting Job
19/12/05 21:04:20 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.56.110:8032
19/12/05 21:04:21 INFO input.FileInputFormat: Total input paths to process : 3
19/12/05 21:04:22 INFO mapreduce.JobSubmitter: number of splits:3
19/12/05 21:04:22 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1575550949758_0001
19/12/05 21:04:23 INFO impl.YarnClientImpl: Submitted application application_1575550949758_0001
19/12/05 21:04:23 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1575550949758_0001/
19/12/05 21:04:23 INFO mapreduce.Job: Running job: job_1575550949758_0001
19/12/05 21:04:30 INFO mapreduce.Job: Job job_1575550949758_0001 running in uber mode : false
19/12/05 21:04:30 INFO mapreduce.Job: map 0% reduce 0%
19/12/05 21:04:34 INFO mapreduce.Job: map 33% reduce 0%
19/12/05 21:04:45 INFO mapreduce.Job: map 33% reduce 11%
19/12/05 21:07:31 INFO mapreduce.Job: Task Id : attempt_1575550949758_0001_m_000001_0, Status : FAILED
Container launch failed for container_1575550949758_0001_01_000004 : java.net.ConnectException: Call From slave2/192.168.56.112 to localhost:42149 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.GeneratedConstructorAccessor47.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
...
Then it reconnects again:
19/12/05 21:07:36 INFO mapreduce.Job: map 67% reduce 11%
19/12/05 21:07:37 INFO mapreduce.Job: map 67% reduce 22%
19/12/05 21:10:33 INFO mapreduce.Job: Task Id : attempt_1575550949758_0001_m_000000_1, Status : FAILED
Container launch failed for container_1575550949758_0001_01_000007 : java.net.ConnectException: Call From slave2/192.168.56.112 to localhost:42149 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
...
It appears that a DataNode is not running on slave2, or that hdfs-site.xml is misconfigured as to where clients should be reading from:
From slave2/192.168.56.112 to localhost:42149 failed
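As a quick check (assuming a standard Hadoop installation with the commands on the PATH), you could verify on slave2 that the worker daemons are actually up, and ask the NameNode which DataNodes it can see:
jps                    # a DataNode and a NodeManager should appear in the list on slave2
hdfs dfsadmin -report  # lists the DataNodes currently registered with the NameNode
If slave2 is missing from the report, check the slaves (or workers) file and the fs.defaultFS / hdfs-site.xml settings on that node; a value pointing at localhost would explain the "localhost:42149" in the error.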

hadoop mahout:run org.apache.classifier.df.mapreduce.TestForest error

I'm new to Mahout and random forests. I want to classify my dataset and have built the random forest on my three virtual Hadoop nodes.
First of all, I made the descriptor (/des.info), and then I built the classifier (/user/hadoop/forest). There was an error, but it completed successfully. However, I got stuck when I tried to test it.
My systems all run CentOS 7 with hadoop-3.0.0.
Here is the HDFS listing:
[hadoop@hadoop1 ~]$ hadoop fs -ls /
Found 5 items
-rw-r--r-- 1 hadoop supergroup 8807688 2018-04-29 19:59 /des.info
-rw-r--r-- 1 hadoop supergroup 79736192 2018-04-29 19:55 /features.txt
-rw-r--r-- 1 hadoop supergroup 278 2018-05-01 03:50 /test.txt
drwx------ - hadoop supergroup 0 2018-04-29 20:05 /tmp
drwxr-xr-x - hadoop supergroup 0 2018-04-29 20:05 /user
Here is the command for the third step:
[hadoop@hadoop1 ~]$ hadoop jar /opt/mahout-distribution-0.9/mahout-examples-0.9-job.jar org.apache.mahout.classifier.df.mapreduce.TestForest -i /test.txt -ds /des.info -m /user/hadoop/forest -a -mr -o prediction
2018-05-01 03:53:21,595 INFO mapreduce.Classifier: Adding the dataset to the DistributedCache
2018-05-01 03:53:21,597 INFO mapreduce.Classifier: Adding the decision forest to the DistributedCache
2018-05-01 03:53:21,600 INFO mapreduce.Classifier: Configuring the job...
2018-05-01 03:53:21,605 INFO mapreduce.Classifier: Running the job...
2018-05-01 03:53:21,702 INFO client.RMProxy: Connecting to ResourceManager at hadoop1/192.168.80.100:8032
2018-05-01 03:53:22,073 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/hadoop/.staging/job_1525056498669_0002
2018-05-01 03:53:22,507 INFO input.FileInputFormat: Total input files to process : 1
2018-05-01 03:53:23,009 INFO mapreduce.JobSubmitter: number of splits:1
2018-05-01 03:53:23,170 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
2018-05-01 03:53:23,353 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1525056498669_0002
2018-05-01 03:53:23,355 INFO mapreduce.JobSubmitter: Executing with tokens: []
2018-05-01 03:53:23,663 INFO conf.Configuration: resource-types.xml not found
2018-05-01 03:53:23,663 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2018-05-01 03:53:23,813 INFO impl.YarnClientImpl: Submitted application application_1525056498669_0002
2018-05-01 03:53:23,870 INFO mapreduce.Job: The url to track the job: http://hadoop1:8088/proxy/application_1525056498669_0002/
2018-05-01 03:53:23,871 INFO mapreduce.Job: Running job: job_1525056498669_0002
2018-05-01 03:53:31,029 INFO mapreduce.Job: Job job_1525056498669_0002 running in uber mode : false
2018-05-01 03:53:31,030 INFO mapreduce.Job: map 0% reduce 0%
2018-05-01 03:53:34,089 INFO mapreduce.Job: Task Id : attempt_1525056498669_0002_m_000000_0, Status : FAILED
Error: java.lang.ArrayIndexOutOfBoundsException: 946827879
at org.apache.mahout.classifier.df.node.Node.read(Node.java:58)
at org.apache.mahout.classifier.df.DecisionForest.readFields(DecisionForest.java:197)
at org.apache.mahout.classifier.df.DecisionForest.read(DecisionForest.java:203)
at org.apache.mahout.classifier.df.DecisionForest.load(DecisionForest.java:225)
at org.apache.mahout.classifier.df.mapreduce.Classifier$CMapper.setup(Classifier.java:209)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:794)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
2018-05-01 03:53:46,822 INFO mapreduce.Job: Task Id : attempt_1525056498669_0002_m_000000_1, Status : FAILED
2018-05-01 03:53:51,342 INFO mapreduce.Job: Task Id : attempt_1525056498669_0002_m_000000_2, Status : FAILED
Error: java.lang.ArrayIndexOutOfBoundsException: 946827879
at org.apache.mahout.classifier.df.node.Node.read(Node.java:58)
at org.apache.mahout.classifier.df.DecisionForest.readFields(DecisionForest.java:197)
at org.apache.mahout.classifier.df.DecisionForest.read(DecisionForest.java:203)
at org.apache.mahout.classifier.df.DecisionForest.load(DecisionForest.java:225)
at org.apache.mahout.classifier.df.mapreduce.Classifier$CMapper.setup(Classifier.java:209)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:794)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
2018-05-01 03:54:05,463 INFO mapreduce.Job: map 100% reduce 0%
2018-05-01 03:54:06,479 INFO mapreduce.Job: Job job_1525056498669_0002 failed with state FAILED due to: Task failed task_1525056498669_0002_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0 killedMaps:0 killedReduces: 0
2018-05-01 03:54:06,556 INFO mapreduce.Job: Counters: 12
Job Counters
Failed map tasks=4
Launched map tasks=4
Other local map tasks=3
Data-local map tasks=1
Total time spent by all maps in occupied slots (ms)=28056
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=28056
Total vcore-milliseconds taken by all map tasks=28056
Total megabyte-milliseconds taken by all map tasks=28729344
Map-Reduce Framework
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
Exception in thread "main" java.lang.IllegalStateException: Job failed!
at org.apache.mahout.classifier.df.mapreduce.Classifier.run(Classifier.java:127)
at org.apache.mahout.classifier.df.mapreduce.TestForest.mapreduce(TestForest.java:188)
at org.apache.mahout.classifier.df.mapreduce.TestForest.testForest(TestForest.java:174)
at org.apache.mahout.classifier.df.mapreduce.TestForest.run(TestForest.java:146)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.mahout.classifier.df.mapreduce.TestForest.main(TestForest.java:315)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
Now I'm very confused.
This is because the Hadoop and Mahout versions are incompatible: the Mahout 0.9 random forest algorithm can only run on Hadoop 1.x.
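A quick way to confirm which Hadoop version the jobs actually run against (standard CLI, nothing assumed beyond a working client):
hadoop version
If it reports 3.x, as here, the stock Mahout 0.9 job jar will not work; you would need to run it on a Hadoop 1.x cluster or rebuild Mahout against your Hadoop version.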

Could not find or load main class 256 - Yarn cluster

I'm currently running a single-node YARN cluster, and for some reason I can't execute even the examples that come with MapReduce (grep, wordcount, etc.). I execute grep with this line:
$HADOOP_HOME/bin/yarn jar /usr/local/hadoop/share/hadoop/yarn/lib/hadoop-mapreduce-examples-2.4.0.jar grep input output2 'dfs[a-z.]+'
This cluster was previously running Giraph programs, but right now I need a MapReduce application, so I switched it back to pure YARN. I'm probably missing something.
All failed containers had the same error:
Container: container_1452447718890_0001_01_000002 on localhost_37976
======================================================================
LogType: stderr
LogLength: 45
Log Contents:
Error: Could not find or load main class 256
Jps results:
7261 SecondaryNameNode
7535 NodeManager
7413 ResourceManager
6928 NameNode
7593 JobHistoryServer
7047 DataNode
7733 QuorumPeerMain
8433 Jps
Main logs:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/01/15 21:53:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/01/15 21:53:50 INFO client.RMProxy: Connecting to ResourceManager at hdnode01/192.168.0.10:8050
16/01/15 21:53:51 WARN mapreduce.JobSubmitter: No job jar file set. User classes may not be found. See Job or Job#setJar(String).
16/01/15 21:53:51 INFO input.FileInputFormat: Total input paths to process : 1
16/01/15 21:53:52 INFO mapreduce.JobSubmitter: number of splits:1
16/01/15 21:53:52 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1452905418747_0001
16/01/15 21:53:53 INFO mapred.YARNRunner: Job jar is not present. Not adding any jar to the list of resources.
16/01/15 21:53:53 INFO impl.YarnClientImpl: Submitted application application_1452905418747_0001
16/01/15 21:53:54 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1452905418747_0001/
16/01/15 21:53:54 INFO mapreduce.Job: Running job: job_1452905418747_0001
16/01/15 21:54:04 INFO mapreduce.Job: Job job_1452905418747_0001 running in uber mode : false
16/01/15 21:54:04 INFO mapreduce.Job: map 0% reduce 0%
16/01/15 21:54:07 INFO mapreduce.Job: Task Id : attempt_1452905418747_0001_m_000000_0, Status : FAILED
Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
org.apache.hadoop.util.Shell$ExitCodeException:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
at org.apache.hadoop.util.Shell.run(Shell.java:418)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
16/01/15 21:54:11 INFO mapreduce.Job: Task Id : attempt_1452905418747_0001_m_000000_1, Status : FAILED
Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
org.apache.hadoop.util.Shell$ExitCodeException:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
at org.apache.hadoop.util.Shell.run(Shell.java:418)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
16/01/15 21:54:15 INFO mapreduce.Job: Task Id : attempt_1452905418747_0001_m_000000_2, Status : FAILED
Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
org.apache.hadoop.util.Shell$ExitCodeException:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
at org.apache.hadoop.util.Shell.run(Shell.java:418)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
16/01/15 21:54:21 INFO mapreduce.Job: map 100% reduce 100%
16/01/15 21:54:21 INFO mapreduce.Job: Job job_1452905418747_0001 failed with state FAILED due to: Task failed task_1452905418747_0001_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
16/01/15 21:54:21 INFO mapreduce.Job: Counters: 12
Job Counters
Failed map tasks=4
Launched map tasks=4
Other local map tasks=3
Data-local map tasks=1
Total time spent by all maps in occupied slots (ms)=15548
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=7774
Total vcore-seconds taken by all map tasks=7774
Total megabyte-seconds taken by all map tasks=3980288
Map-Reduce Framework
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
16/01/15 21:54:21 INFO client.RMProxy: Connecting to ResourceManager at hdnode01/192.168.0.10:8050
16/01/15 21:54:22 WARN mapreduce.JobSubmitter: No job jar file set. User classes may not be found. See Job or Job#setJar(String).
16/01/15 21:54:22 INFO input.FileInputFormat: Total input paths to process : 0
16/01/15 21:54:22 INFO mapreduce.JobSubmitter: number of splits:0
16/01/15 21:54:22 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1452905418747_0002
16/01/15 21:54:22 INFO mapred.YARNRunner: Job jar is not present. Not adding any jar to the list of resources.
16/01/15 21:54:22 INFO impl.YarnClientImpl: Submitted application application_1452905418747_0002
16/01/15 21:54:22 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1452905418747_0002/
16/01/15 21:54:22 INFO mapreduce.Job: Running job: job_1452905418747_0002
16/01/15 21:54:32 INFO mapreduce.Job: Job job_1452905418747_0002 running in uber mode : false
16/01/15 21:54:32 INFO mapreduce.Job: map 0% reduce 0%
16/01/15 21:54:36 INFO mapreduce.Job: Task Id : attempt_1452905418747_0002_r_000000_0, Status : FAILED
Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
org.apache.hadoop.util.Shell$ExitCodeException:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
at org.apache.hadoop.util.Shell.run(Shell.java:418)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
16/01/15 21:54:41 INFO mapreduce.Job: Task Id : attempt_1452905418747_0002_r_000000_1, Status : FAILED
Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
org.apache.hadoop.util.Shell$ExitCodeException:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
at org.apache.hadoop.util.Shell.run(Shell.java:418)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
16/01/15 21:54:46 INFO mapreduce.Job: Task Id : attempt_1452905418747_0002_r_000000_2, Status : FAILED
Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
org.apache.hadoop.util.Shell$ExitCodeException:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
at org.apache.hadoop.util.Shell.run(Shell.java:418)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
16/01/15 21:54:51 INFO mapreduce.Job: map 0% reduce 100%
16/01/15 21:54:52 INFO mapreduce.Job: Job job_1452905418747_0002 failed with state FAILED due to: Task failed task_1452905418747_0002_r_000000
Job failed as tasks failed. failedMaps:0 failedReduces:1
16/01/15 21:54:52 INFO mapreduce.Job: Counters: 10
Job Counters
Failed reduce tasks=4
Launched reduce tasks=4
Total time spent by all maps in occupied slots (ms)=0
Total time spent by all reduces in occupied slots (ms)=11882
Total time spent by all reduce tasks (ms)=5941
Total vcore-seconds taken by all reduce tasks=5941
Total megabyte-seconds taken by all reduce tasks=3041792
Map-Reduce Framework
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
I had a problem in mapred-site.xml. My mapred-site.xml was:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>hdnode01:54311</value>
  </property>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>4</value>
  </property>
  <property>
    <name>mapreduce.job.maps</name>
    <value>4</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>512</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>512</value>
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>256</value>
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>256</value>
  </property>
</configuration>
The last two properties were the problem: the values of mapreduce.map.java.opts and mapreduce.reduce.java.opts are passed verbatim to the java command line, so the bare 256 was interpreted as the name of the main class, which is exactly the "Could not find or load main class 256" error. Deleting both (or using -Xmx256m instead of 256) solved my problem.
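For reference, a corrected version of the last two properties might look like this (a sketch of the fix just described; the heap sizes are examples and should stay below the matching mapreduce.*.memory.mb container limits):
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx256m</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx256m</value>
</property>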

Hadoop-2.5.1 + Nutch-2.2.1: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected

Command: ./crawl /urls /mydir XXXXX 2
When I run this command on Hadoop 2.5.1 and Nutch 2.2.1, I get the following errors:
14/10/07 19:58:10 INFO mapreduce.Job: Running job: job_1411692996443_0016
14/10/07 19:58:17 INFO mapreduce.Job: Job job_1411692996443_0016 running in uber mode : false
14/10/07 19:58:17 INFO mapreduce.Job: map 0% reduce 0%
14/10/07 19:58:21 INFO mapreduce.Job: Task Id : attempt_1411692996443_0016_m_000000_0, Status : FAILED
Error: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
14/10/07 19:58:26 INFO mapreduce.Job: Task Id : attempt_1411692996443_0016_m_000000_1, Status : FAILED
Error: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
14/10/07 19:58:31 INFO mapreduce.Job: Task Id : attempt_1411692996443_0016_m_000000_2, Status : FAILED
Error: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
14/10/07 19:58:36 INFO mapreduce.Job: map 100% reduce 0%
14/10/07 19:58:36 INFO mapreduce.Job: Job job_1411692996443_0016 failed with state FAILED due to: Task failed task_1411692996443_0016_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
14/10/07 19:58:36 INFO mapreduce.Job: Counters: 12
Job Counters
Failed map tasks=4
Launched map tasks=4
Other local map tasks=3
Data-local map tasks=1
Total time spent by all maps in occupied slots (ms)=11785
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=11785
Total vcore-seconds taken by all map tasks=11785
Total megabyte-seconds taken by all map tasks=12067840
Map-Reduce Framework
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
14/10/07 19:58:36 ERROR crawl.InjectorJob: InjectorJob: java.lang.RuntimeException: job failed: name=[/mydir]inject /urls, jobid=job_1411692996443_0016
at org.apache.nutch.util.NutchJob.waitForCompletion(NutchJob.java:55)
at org.apache.nutch.crawl.InjectorJob.run(InjectorJob.java:233)
at org.apache.nutch.crawl.InjectorJob.inject(InjectorJob.java:251)
at org.apache.nutch.crawl.InjectorJob.run(InjectorJob.java:273)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.nutch.crawl.InjectorJob.main(InjectorJob.java:282)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
You are probably using Gora (or something else) compiled against Hadoop 1 (from the Maven repo?). You can download Gora (0.5?) and build it against Hadoop 2.
Perhaps this is just the first trouble in a series of problems.
Please let us know about your next steps.
I had a similar error on Nutch 2.x with Hadoop 2.4.0.
Recompile Nutch with Hadoop 2.5.1 dependencies (via ivy) and exclude all Hadoop 1.x dependencies; you can find them in lib, probably hadoop-core.
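As a rough sketch (assuming you build from the Nutch source tree, where the dependencies are declared in ivy/ivy.xml; the artifact names and rev below are illustrative, not taken from the question):
<!-- in ivy/ivy.xml, inside <dependencies>: keep the Hadoop 1.x core jar out of the build -->
<exclude org="org.apache.hadoop" module="hadoop-core"/>
<!-- and pull in the Hadoop 2.x client artifacts instead -->
<dependency org="org.apache.hadoop" name="hadoop-common" rev="2.5.1" conf="*->default"/>
<dependency org="org.apache.hadoop" name="hadoop-mapreduce-client-core" rev="2.5.1" conf="*->default"/>
After rebuilding, verify that no hadoop-core-1.x jar remains under lib.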

Hadoop distcp command not working

Hi, I am trying to move my data from a cluster running CDH4.3 to a cluster running CDH4.5.
I am executing the following command:
hadoop distcp -update hftp://server1:50070/hbase/test/x hdfs://server2:8020/copy/
After executing it, I get the following error:
14/01/28 19:42:43 INFO tools.DistCp: srcPaths=[hftp://server1:50070/hbase/test/x]
14/01/28 19:42:43 INFO tools.DistCp: destPath=hdfs://server2:8020/copy
14/01/28 19:42:45 INFO tools.DistCp: sourcePathsCount=1
14/01/28 19:42:45 INFO tools.DistCp: filesToCopyCount=1
14/01/28 19:42:45 INFO tools.DistCp: bytesToCopyCount=1
14/01/28 19:42:46 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
14/01/28 19:42:47 INFO mapred.JobClient: Running job: job_201401101918_0008
14/01/28 19:42:48 INFO mapred.JobClient: map 0% reduce 0%
14/01/28 19:43:05 INFO mapred.JobClient: map 100% reduce 0%
14/01/28 19:43:07 INFO mapred.JobClient: Task Id : attempt_201401101918_0008_m_000000_0, Status : FAILED
14/01/28 19:43:08 INFO mapred.JobClient: map 0% reduce 0%
14/01/28 19:43:19 INFO mapred.JobClient: map 100% reduce 0%
14/01/28 19:43:22 INFO mapred.JobClient: Task Id : attempt_201401101918_0008_m_000000_1, Status : FAILED
java.io.IOException: Copied: 0 Skipped: 0 Failed: 1
at org.apache.hadoop.tools.DistCp$CopyFilesMapper.close(DistCp.java:582)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:417)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
at org.apache.hadoop.mapred.Child.main(Child.java:262)
14/01/28 19:43:23 INFO mapred.JobClient: map 0% reduce 0%
14/01/28 19:43:33 INFO mapred.JobClient: map 100% reduce 0%
14/01/28 19:43:35 INFO mapred.JobClient: Task Id : attempt_201401101918_0008_m_000000_2, Status : FAILED
java.io.IOException: Copied: 0 Skipped: 0 Failed: 1
at org.apache.hadoop.tools.DistCp$CopyFilesMapper.close(DistCp.java:582)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:417)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
at org.apache.hadoop.mapred.Child.main(Child.java:262)
14/01/28 19:43:36 INFO mapred.JobClient: map 0% reduce 0%
14/01/28 19:43:46 INFO mapred.JobClient: map 100% reduce 0%
14/01/28 19:43:50 INFO mapred.JobClient: map 0% reduce 0%
14/01/28 19:43:53 INFO mapred.JobClient: Job complete: job_201401101918_0008
14/01/28 19:43:53 INFO mapred.JobClient: Counters: 6
14/01/28 19:43:53 INFO mapred.JobClient: Job Counters
14/01/28 19:43:53 INFO mapred.JobClient: Failed map tasks=1
14/01/28 19:43:53 INFO mapred.JobClient: Launched map tasks=4
14/01/28 19:43:53 INFO mapred.JobClient: Total time spent by all maps in occupied slots (ms)=64095
14/01/28 19:43:53 INFO mapred.JobClient: Total time spent by all reduces in occupied slots (ms)=0
14/01/28 19:43:53 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
14/01/28 19:43:53 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
14/01/28 19:43:53 INFO mapred.JobClient: Job Failed: NA
With failures, global counters are inaccurate; consider running with -i
Copy failed: java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1388)
at org.apache.hadoop.tools.DistCp.copy(DistCp.java:667)
at org.apache.hadoop.tools.DistCp.run(DistCp.java:881)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.tools.DistCp.main(DistCp.java:908)
[hdfs@sdl1039 root]$ hadoop distcp -update hftp://server1:50070/hbase/test/x hdfs://server2:8020/copy hadoop distcp -update hftp://server1:50070/hbase/test/x hdfs://server2:8020/copy
14/01/28 19:46:09 INFO tools.DistCp: srcPaths=[hftp://server1:50070/hbase/test/x, hdfs://server2:8020/copy, hadoop, distcp, hftp://server1:50070/hbase/test/x]
14/01/28 19:46:09 INFO tools.DistCp: destPath=hdfs://server2:8020/copy
With failures, global counters are inaccurate; consider running with -i
Copy failed: org.apache.hadoop.mapred.InvalidInputException: Input source hadoop does not exist.
Input source distcp does not exist.
at org.apache.hadoop.tools.DistCp.checkSrcPath(DistCp.java:641)
at org.apache.hadoop.tools.DistCp.copy(DistCp.java:656)
at org.apache.hadoop.tools.DistCp.run(DistCp.java:881)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.tools.DistCp.main(DistCp.java:908)
Please guide me on where I am going wrong.
I found a solution for now:
hadoop distcp -update hdfs://server1:8020/hbase/test/x hdfs://server2:8020/copy/
But I would definitely like to know why hftp is not working for me.
I think you have the wrong port number for hftp; 50070 is the default port for the NameNode web UI.
Try:
hadoop distcp -update hftp://server1/hbase/test/x hdfs://server2:8020/copy/
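To double-check which HTTP port the source NameNode actually exposes (a quick sanity check; on older configurations the key may be dfs.http.address instead):
hdfs getconf -confKey dfs.namenode.http-address
Whatever host:port this prints is what the hftp:// URI should point at.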
