I'm running a graph traversal algorithm using MapReduce, and it gives the desired output when tested without Hadoop. But on running the command:
hadoop jar /usr/local/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.6.0.jar -file /home/hduser/finalmap.py -mapper 'python finalmap.py' -file /home/hduser/finalred.py -reducer 'python finalred.py' -input /Random_Walk_Input -output Random_Walk_Output1
the following happens:
16/01/27 11:03:51 INFO mapreduce.Job: map 0% reduce 0%
16/01/27 11:03:55 INFO mapreduce.Job: map 33% reduce 0%
16/01/27 11:04:02 INFO mapreduce.Job: Task Id : attempt_1453872707553_0001_m_000001_1, Status : FAILED
Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
    at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:322)
    at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:535)
    at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:130)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
    at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
16/01/27 11:04:03 INFO mapreduce.Job: map 50% reduce 0%
16/01/27 11:04:14 INFO mapreduce.Job: Task Id : attempt_1453872707553_0001_m_000001_2, Status : FAILED
Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
    at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:322)
    at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:535)
    at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:130)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
    at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
16/01/27 11:04:22 INFO mapreduce.Job: map 50% reduce 17%
16/01/27 11:04:25 INFO mapreduce.Job: map 100% reduce 100%
16/01/27 11:04:26 INFO mapreduce.Job: Job job_1453872707553_0001 failed with state FAILED due to: Task failed task_1453872707553_0001_m_000001
Job failed as tasks failed. failedMaps:1 failedReduces:0
16/01/27 11:04:27 INFO mapreduce.Job: Counters: 39
File System Counters
FILE: Number of bytes read=0
FILE: Number of bytes written=15725173
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=413787
HDFS: Number of bytes written=0
HDFS: Number of read operations=3
HDFS: Number of large read operations=0
HDFS: Number of write operations=0
Job Counters
Failed map tasks=4
Killed reduce tasks=1
Launched map tasks=5
Launched reduce tasks=1
Other local map tasks=3
Data-local map tasks=2
Total time spent by all maps in occupied slots (ms)=68482
Total time spent by all reduces in occupied slots (ms)=19382
Total time spent by all map tasks (ms)=68482
Total time spent by all reduce tasks (ms)=19382
Total vcore-seconds taken by all map tasks=68482
Total vcore-seconds taken by all reduce tasks=19382
Total megabyte-seconds taken by all map tasks=70125568
Total megabyte-seconds taken by all reduce tasks=19847168
Map-Reduce Framework
Map input records=17666
Map output records=767145
Map output bytes=14081829
Map output materialized bytes=15616125
Input split bytes=91
Combine input records=0
Spilled Records=767145
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=229
CPU time spent (ms)=17120
Physical memory (bytes) snapshot=269684736
Virtual memory (bytes) snapshot=852369408
Total committed heap usage (bytes)=200802304
File Input Format Counters
Bytes Read=413696
16/01/27 11:04:27 ERROR streaming.StreamJob: Job not successful!
Streaming Command Failed!
What does this mean? The output shows map and reduce each reaching 100%, but then reports failedMaps:1 and failedReduces:0.
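For reference, "tested without using hadoop" means piping a sample of the input through the scripts by hand, along these lines (sample_input.txt stands in for the actual input file):

cat sample_input.txt | python finalmap.py | sort | python finalred.py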
Make sure that your streaming jar version and your Hadoop version match (i.e. they carry the same version number).
This fixed the error for me!!
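For example, a quick way to compare the two from the shell (paths as in the question's command; this is a sanity check, not a full diagnosis):

hadoop version
ls /usr/local/hadoop/share/hadoop/tools/lib/hadoop-streaming-*.jar

If hadoop version reports, say, 2.7.x while only hadoop-streaming-2.6.0.jar is present, point the command at the streaming jar that actually ships with your installation.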
Related
I've recently started learning how to use the Hadoop system, and decided it's time to try writing some code. Before that, I wanted to try running the examples seen in the Getting Started page. However, it does not seem to produce any visible results.
I'm currently running Hadoop 3.3.1 in a single-node setup with JDK 11.0.11, on Windows 10 (due to current development requirements).
I've used the following command on cmd:
hadoop jar %hadoop_home%/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar grep input /output 'dfs[a-z.]+'
The output to the command:
C:\Windows\system32>hadoop jar %hadoop_home%/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar grep input /output 'dfs[a-z.]+'
2021-12-15 00:33:10,486 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at /0.0.0.0:8032
2021-12-15 00:33:10,800 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/E/.staging/job_1639519343908_0005
2021-12-15 00:33:11,029 INFO input.FileInputFormat: Total input files to process : 10
2021-12-15 00:33:11,108 INFO mapreduce.JobSubmitter: number of splits:10
2021-12-15 00:33:11,281 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1639519343908_0005
2021-12-15 00:33:11,281 INFO mapreduce.JobSubmitter: Executing with tokens: []
2021-12-15 00:33:11,442 INFO conf.Configuration: resource-types.xml not found
2021-12-15 00:33:11,443 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2021-12-15 00:33:11,497 INFO impl.YarnClientImpl: Submitted application application_1639519343908_0005
2021-12-15 00:33:11,527 INFO mapreduce.Job: The url to track the job: http://DESKTOP-S15C716:8088/proxy/application_1639519343908_0005/
2021-12-15 00:33:11,528 INFO mapreduce.Job: Running job: job_1639519343908_0005
2021-12-15 00:33:19,611 INFO mapreduce.Job: Job job_1639519343908_0005 running in uber mode : false
2021-12-15 00:33:19,615 INFO mapreduce.Job: map 0% reduce 0%
2021-12-15 00:33:31,178 INFO mapreduce.Job: map 50% reduce 0%
2021-12-15 00:33:32,263 INFO mapreduce.Job: map 60% reduce 0%
2021-12-15 00:33:39,624 INFO mapreduce.Job: map 90% reduce 0%
2021-12-15 00:33:40,632 INFO mapreduce.Job: map 100% reduce 0%
2021-12-15 00:33:41,636 INFO mapreduce.Job: map 100% reduce 100%
2021-12-15 00:33:41,648 INFO mapreduce.Job: Job job_1639519343908_0005 completed successfully
2021-12-15 00:33:41,760 INFO mapreduce.Job: Counters: 51
File System Counters
FILE: Number of bytes read=6
FILE: Number of bytes written=3021766
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=31877
HDFS: Number of bytes written=86
HDFS: Number of read operations=35
HDFS: Number of large read operations=0
HDFS: Number of write operations=2
HDFS: Number of bytes read erasure-coded=0
Job Counters
Killed map tasks=1
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=89653
Total time spent by all reduces in occupied slots (ms)=8222
Total time spent by all map tasks (ms)=89653
Total time spent by all reduce tasks (ms)=8222
Total vcore-milliseconds taken by all map tasks=89653
Total vcore-milliseconds taken by all reduce tasks=8222
Total megabyte-milliseconds taken by all map tasks=91804672
Total megabyte-milliseconds taken by all reduce tasks=8419328
Map-Reduce Framework
Map input records=819
Map output records=0
Map output bytes=0
Map output materialized bytes=60
Input split bytes=1139
Combine input records=0
Combine output records=0
Reduce input groups=0
Reduce shuffle bytes=60
Reduce input records=0
Reduce output records=0
Spilled Records=0
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=90
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
Total committed heap usage (bytes)=2952790016
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=30738
File Output Format Counters
Bytes Written=86
2021-12-15 00:33:41,790 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at /0.0.0.0:8032
2021-12-15 00:33:41,814 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/E/.staging/job_1639519343908_0006
2021-12-15 00:33:41,855 INFO input.FileInputFormat: Total input files to process : 1
2021-12-15 00:33:41,913 INFO mapreduce.JobSubmitter: number of splits:1
2021-12-15 00:33:41,950 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1639519343908_0006
2021-12-15 00:33:41,950 INFO mapreduce.JobSubmitter: Executing with tokens: []
2021-12-15 00:33:42,179 INFO impl.YarnClientImpl: Submitted application application_1639519343908_0006
2021-12-15 00:33:42,190 INFO mapreduce.Job: The url to track the job: http://DESKTOP-S15C716:8088/proxy/application_1639519343908_0006/
2021-12-15 00:33:42,191 INFO mapreduce.Job: Running job: job_1639519343908_0006
2021-12-15 00:33:55,301 INFO mapreduce.Job: Job job_1639519343908_0006 running in uber mode : false
2021-12-15 00:33:55,302 INFO mapreduce.Job: map 0% reduce 0%
2021-12-15 00:34:00,336 INFO mapreduce.Job: map 100% reduce 0%
2021-12-15 00:34:06,366 INFO mapreduce.Job: map 100% reduce 100%
2021-12-15 00:34:07,375 INFO mapreduce.Job: Job job_1639519343908_0006 completed successfully
2021-12-15 00:34:07,404 INFO mapreduce.Job: Counters: 50
File System Counters
FILE: Number of bytes read=6
FILE: Number of bytes written=548197
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=212
HDFS: Number of bytes written=0
HDFS: Number of read operations=9
HDFS: Number of large read operations=0
HDFS: Number of write operations=2
HDFS: Number of bytes read erasure-coded=0
Job Counters
Launched map tasks=1
Launched reduce tasks=1
Data-local map tasks=1
Total time spent by all maps in occupied slots (ms)=3232
Total time spent by all reduces in occupied slots (ms)=3610
Total time spent by all map tasks (ms)=3232
Total time spent by all reduce tasks (ms)=3610
Total vcore-milliseconds taken by all map tasks=3232
Total vcore-milliseconds taken by all reduce tasks=3610
Total megabyte-milliseconds taken by all map tasks=3309568
Total megabyte-milliseconds taken by all reduce tasks=3696640
Map-Reduce Framework
Map input records=0
Map output records=0
Map output bytes=0
Map output materialized bytes=6
Input split bytes=126
Combine input records=0
Combine output records=0
Reduce input groups=0
Reduce shuffle bytes=6
Reduce input records=0
Reduce output records=0
Spilled Records=0
Shuffled Maps =1
Failed Shuffles=0
Merged Map outputs=1
GC time elapsed (ms)=13
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
Total committed heap usage (bytes)=536870912
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=86
File Output Format Counters
Bytes Written=0
Yet when viewing the contents of the newly created 'output' folder, I get the following result:
hdfs dfs -ls /output
Found 2 items
-rw-r--r-- 1 E supergroup 0 2021-12-15 00:34 /output/_SUCCESS
-rw-r--r-- 1 E supergroup 0 2021-12-15 00:34 /output/part-r-00000
I.e. there's no data written to those files! Could anyone please assist me?
If you have no data in your HDFS input folder that matches the grep pattern 'dfs[a-z.]+', then the output will be empty.
From the linked docs (which are for Unix, not Windows), make sure this command completed:
bin/hdfs dfs -put %HADOOP_HOME%/etc/hadoop/*.xml input
And you can also run grep dfs $HADOOP_HOME/etc/hadoop/*.xml locally (at least on Unix) to verify that there should be output data.
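On Windows, a rough local equivalent of that check might be (a sketch; findstr's regex dialect has no +, so the character class is repeated instead):

findstr /R "dfs[a-z.][a-z.]*" %HADOOP_HOME%\etc\hadoop\*.xml

It's also worth confirming that the input files actually landed in HDFS, and inspecting the job output directly:

hdfs dfs -ls input
hdfs dfs -cat /output/part-r-00000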
I ran a Hadoop MapReduce example with the command
hadoop jar hadoop-mapreduce-examples-2.7.1.jar wordcount input output
and sometimes it worked:
18/11/06 00:37:06 INFO client.RMProxy: Connecting to ResourceManager at node-0/10.10.1.1:8032
18/11/06 00:37:06 INFO input.FileInputFormat: Total input paths to process : 1
18/11/06 00:37:06 INFO mapreduce.JobSubmitter: number of splits:1
18/11/06 00:37:06 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1541484532513_0006
18/11/06 00:37:06 INFO impl.YarnClientImpl: Submitted application application_1541484532513_0006
18/11/06 00:37:06 INFO mapreduce.Job: The url to track the job: http://node-0:8088/proxy/application_1541484532513_0006/
18/11/06 00:37:06 INFO mapreduce.Job: Running job: job_1541484532513_0006
18/11/06 00:37:11 INFO mapreduce.Job: Job job_1541484532513_0006 running in uber mode : false
18/11/06 00:37:11 INFO mapreduce.Job: map 0% reduce 0%
18/11/06 00:37:15 INFO mapreduce.Job: map 100% reduce 0%
18/11/06 00:37:18 INFO mapreduce.Job: map 100% reduce 100%
18/11/06 00:37:18 INFO mapreduce.Job: Job job_1541484532513_0006 completed successfully
18/11/06 00:37:18 INFO mapreduce.Job: Counters: 44
File System Counters
FILE: Number of bytes read=216
FILE: Number of bytes written=231641
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
Job Counters
Launched map tasks=1
Launched reduce tasks=1
Rack-local map tasks=1
Total time spent by all maps in occupied slots (ms)=1300
Total time spent by all reduces in occupied slots (ms)=1265
Total time spent by all map tasks (ms)=1300
Total time spent by all reduce tasks (ms)=1265
Total vcore-seconds taken by all map tasks=1300
Total vcore-seconds taken by all reduce tasks=1265
Total megabyte-seconds taken by all map tasks=1331200
Total megabyte-seconds taken by all reduce tasks=1295360
Map-Reduce Framework
Map input records=1
Map output records=2
Map output bytes=20
Map output materialized bytes=30
Input split bytes=135
Combine input records=2
Combine output records=2
Reduce input groups=2
Reduce shuffle bytes=30
Reduce input records=2
Reduce output records=2
Spilled Records=4
Shuffled Maps =1
Failed Shuffles=0
Merged Map outputs=1
GC time elapsed (ms)=14
CPU time spent (ms)=660
Physical memory (bytes) snapshot=402006016
Virtual memory (bytes) snapshot=4040646656
Total committed heap usage (bytes)=402653184
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=32
File Output Format Counters
Bytes Written=28
or it worked but produced logs like the ones below:
18/11/06 00:35:17 INFO mapreduce.Job: Task Id : attempt_1541484532513_0003_m_000000_1, Status : FAILED
File file:/tmp/hadoop-yarn/staging/suqiang/.staging/job_1541484532513_0003/job.jar does not exist
java.io.FileNotFoundException: File file:/tmp/hadoop-yarn/staging/suqiang/.staging/job_1541484532513_0003/job.jar does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:606)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:819)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:596)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:253)
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:63)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:361)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:358)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:62)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
18/11/06 00:35:21 INFO mapreduce.Job: Task Id : attempt_1541484532513_0003_m_000000_2, Status : FAILED
File file:/tmp/hadoop-yarn/staging/suqiang/.staging/job_1541484532513_0003/job.jar does not exist
java.io.FileNotFoundException: File file:/tmp/hadoop-yarn/staging/suqiang/.staging/job_1541484532513_0003/job.jar does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:606)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:819)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:596)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:253)
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:63)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:361)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:358)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:62)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
18/11/06 00:35:25 INFO mapreduce.Job: map 100% reduce 0%
18/11/06 00:35:29 INFO mapreduce.Job: map 100% reduce 100%
18/11/06 00:35:29 INFO mapreduce.Job: Job job_1541484532513_0003 completed successfully
18/11/06 00:35:29 INFO mapreduce.Job: Counters: 46
File System Counters
FILE: Number of bytes read=216
FILE: Number of bytes written=231635
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
Job Counters
Failed map tasks=3
Launched map tasks=4
Launched reduce tasks=1
Other local map tasks=3
Rack-local map tasks=1
Total time spent by all maps in occupied slots (ms)=6266
Total time spent by all reduces in occupied slots (ms)=1290
Total time spent by all map tasks (ms)=6266
Total time spent by all reduce tasks (ms)=1290
Total vcore-seconds taken by all map tasks=6266
Total vcore-seconds taken by all reduce tasks=1290
Total megabyte-seconds taken by all map tasks=6416384
Total megabyte-seconds taken by all reduce tasks=1320960
Map-Reduce Framework
Map input records=1
Map output records=2
Map output bytes=20
Map output materialized bytes=30
Input split bytes=135
Combine input records=2
Combine output records=2
Reduce input groups=2
Reduce shuffle bytes=30
Reduce input records=2
Reduce output records=2
Spilled Records=4
Shuffled Maps =1
Failed Shuffles=0
Merged Map outputs=1
GC time elapsed (ms)=14
CPU time spent (ms)=680
Physical memory (bytes) snapshot=404619264
Virtual memory (bytes) snapshot=4036009984
Total committed heap usage (bytes)=402653184
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=32
File Output Format Counters
Bytes Written=28
It's weird! The job still completed successfully even though the log said that job.jar doesn't exist. But sometimes it failed outright, with the same operations:
18/11/06 00:36:41 INFO mapreduce.Job: Task Id : attempt_1541484532513_0005_r_000000_1, Status : FAILED
File file:/tmp/hadoop-yarn/staging/suqiang/.staging/job_15414845
18/11/06 00:36:46 INFO mapreduce.Job: Task Id : attempt_1541484532513_0005_r_000000_2, Status : FAILED
File file:/tmp/hadoop-yarn/staging/suqiang/.staging/job_1541484532513_0005/job.jar does not exist
java.io.FileNotFoundException: File file:/tmp/hadoop-yarn/staging/suqiang/.staging/job_1541484532513_0005/job.jar does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:606)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:819)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:596)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:253)
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:63)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:361)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:358)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:62)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
18/11/06 00:36:52 INFO mapreduce.Job: map 100% reduce 100%
18/11/06 00:36:52 INFO mapreduce.Job: Job job_1541484532513_0005 failed with state FAILED due to: Task failed task_1541484532513_0005_r_000000
Job failed as tasks failed. failedMaps:0 failedReduces:1
18/11/06 00:36:52 INFO mapreduce.Job: Counters: 35
File System Counters
FILE: Number of bytes read=186
FILE: Number of bytes written=115831
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
Job Counters
Failed map tasks=1
Failed reduce tasks=4
Launched map tasks=2
Launched reduce tasks=4
Other local map tasks=1
Rack-local map tasks=1
Total time spent by all maps in occupied slots (ms)=2217
Total time spent by all reduces in occupied slots (ms)=8012
Total time spent by all map tasks (ms)=2217
Total time spent by all reduce tasks (ms)=8012
Total vcore-seconds taken by all map tasks=2217
Total vcore-seconds taken by all reduce tasks=8012
Total megabyte-seconds taken by all map tasks=2270208
Total megabyte-seconds taken by all reduce tasks=8204288
Map-Reduce Framework
Map input records=1
Map output records=2
Map output bytes=20
Map output materialized bytes=30
Input split bytes=135
Combine input records=2
Combine output records=2
Spilled Records=2
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=7
CPU time spent (ms)=250
Physical memory (bytes) snapshot=252555264
Virtual memory (bytes) snapshot=2014208000
Total committed heap usage (bytes)=201326592
File Input Format Counters
Bytes Read=32
What happened in my experiment? Is it a mistake on my part, or a problem with the Hadoop example itself? Has anyone encountered the same problem? Any advice and solutions would be appreciated.
Since your job fails when it is in uber mode, the problem lies in the ApplicationMaster being unable to access HDFS or those folders in HDFS.
While we find the real solution to your problem, you can disable uber mode for your job like this:
hadoop jar hadoop-mapreduce-examples-2.7.1.jar wordcount -D mapreduce.job.ubertask.enable=false input output
To fix the problem for good, start by cleaning up your ApplicationMaster (AM) configuration.
EDIT: Maybe your problem is in /etc/hosts. Could you print its contents on both machines? Maybe you don't have a mapping from 10.10.1.2 to localhost on the 10.10.1.2 machine.
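For example, a minimal /etc/hosts on both machines might look like this (node-0 at 10.10.1.1 comes from your logs; node-1 is a hypothetical name for the second machine):

127.0.0.1   localhost
10.10.1.1   node-0
10.10.1.2   node-1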
I am running a TeraSort benchmark for Hadoop using the following command:
jar /Users/karan.verma/Documents/backups/h/hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar teragen -Dmapreduce.job.maps=100 1t random-data
and got the following logs printed for 100 map tasks:
18/03/27 13:06:03 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/03/27 13:06:04 INFO client.RMProxy: Connecting to ResourceManager at /127.0.0.1:8032
18/03/27 13:06:05 INFO terasort.TeraSort: Generating -727379968 using 100
18/03/27 13:06:05 INFO mapreduce.JobSubmitter: number of splits:100
18/03/27 13:06:05 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1522131782827_0001
18/03/27 13:06:06 INFO impl.YarnClientImpl: Submitted application application_1522131782827_0001
18/03/27 13:06:06 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1522131782827_0001/
18/03/27 13:06:06 INFO mapreduce.Job: Running job: job_1522131782827_0001
18/03/27 13:06:16 INFO mapreduce.Job: Job job_1522131782827_0001 running in uber mode : false
18/03/27 13:06:16 INFO mapreduce.Job: map 0% reduce 0%
18/03/27 13:06:29 INFO mapreduce.Job: map 2% reduce 0%
18/03/27 13:06:31 INFO mapreduce.Job: map 3% reduce 0%
18/03/27 13:06:32 INFO mapreduce.Job: map 5% reduce 0%
....
18/03/27 13:09:27 INFO mapreduce.Job: map 100% reduce 0%
and here is the final counters as printed on console:
18/03/27 13:09:29 INFO mapreduce.Job: Counters: 30
File System Counters
FILE: Number of bytes read=0
FILE: Number of bytes written=10660990
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=8594
HDFS: Number of bytes written=0
HDFS: Number of read operations=400
HDFS: Number of large read operations=0
HDFS: Number of write operations=200
Job Counters
Launched map tasks=100
Other local map tasks=100
Total time spent by all maps in occupied slots (ms)=983560
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=983560
Total vcore-milliseconds taken by all map tasks=983560
Total megabyte-milliseconds taken by all map tasks=1007165440
Map-Reduce Framework
Map input records=0
Map output records=0
Input split bytes=8594
Spilled Records=0
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=9746
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
Total committed heap usage (bytes)=11220811776
File Input Format Counters
Bytes Read=0
File Output Format Counters
Bytes Written=0
The job scheduler UI showed the same picture (screenshot omitted). Please suggest why there is no reduce task?
Your run command says that you're running teragen and not terasort. teragen simply generates data that you can then use for terasort, and so no reducers are needed.
To run terasort over the data that you've just generated, run:
hadoop jar /Users/karan.verma/Documents/backups/h/hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar terasort random-data terasort-output
You should then see reducers.
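If you then want to check the sorted output, the same examples jar also includes a teravalidate program (output path names here are illustrative):

hadoop jar /Users/karan.verma/Documents/backups/h/hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar teravalidate terasort-output terasort-report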
No reduce tasks run when executing teragen. Here is the documentation:
TeraGen will run map tasks to generate the data and will not run any reduce tasks. The default number of map tasks is defined by the "mapreduce.job.maps=2" param. Its only purpose here is to generate the 1TB of random data in the following format: "10 bytes key | 2 bytes break | 32 bytes ascii/hex | 4 bytes break | 48 bytes filler | 4 bytes break | \r\n".
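As a side note on the size argument: each teragen row is 100 bytes, and 1,000,000,000,000 truncated to a 32-bit int is exactly -727379968, which matches the "Generating -727379968" line in your log. In other words, "1t" appears to have been parsed as 10^12 rows (i.e. 100 TB of data), not 1 TB. To request 1 TB unambiguously, you could pass the row count explicitly (path as in your command):

hadoop jar /Users/karan.verma/Documents/backups/h/hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar teragen -Dmapreduce.job.maps=100 10000000000 random-data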
I was trying to distcp one directory which has hundreds of small files with the extension .avro, but it fails for some files with the following error:
14/09/18 13:05:19 INFO mapred.JobClient: map 99% reduce 0%
14/09/18 13:05:22 INFO mapred.JobClient: map 100% reduce 0%
14/09/18 13:05:24 INFO mapred.JobClient: Task Id : attempt_201408291204_35665_m_000000_0, Status : FAILED
java.io.IOException: Copied: 32 Skipped: 0 Failed: 1
at org.apache.hadoop.tools.DistCp$CopyFilesMapper.close(DistCp.java:584)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:417)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
at org.apache.hadoop.mapred.Child.main(Child.java:262)
14/09/18 13:05:25 INFO mapred.JobClient: map 83% reduce 0%
14/09/18 13:05:32 INFO mapred.JobClient: map 100% reduce 0%
14/09/18 13:05:32 INFO mapred.JobClient: Task Id : attempt_201408291204_35665_m_000005_0, Status : FAILED
java.io.IOException: Copied: 20 Skipped: 0 Failed: 1
at org.apache.hadoop.tools.DistCp$CopyFilesMapper.close(DistCp.java:584)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:417)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
at org.apache.hadoop.mapred.Child.main(Child.java:262)
14/09/18 13:05:33 INFO mapred.JobClient: map 83% reduce 0%
14/09/18 13:05:41 INFO mapred.JobClient: map 93% reduce 0%
14/09/18 13:05:48 INFO mapred.JobClient: map 100% reduce 0%
14/09/18 13:05:51 INFO mapred.JobClient: Job complete: job_201408291204_35665
14/09/18 13:05:51 INFO mapred.JobClient: Counters: 33
14/09/18 13:05:51 INFO mapred.JobClient: File System Counters
14/09/18 13:05:51 INFO mapred.JobClient: FILE: Number of bytes read=0
14/09/18 13:05:51 INFO mapred.JobClient: FILE: Number of bytes written=1050200
14/09/18 13:05:51 INFO mapred.JobClient: FILE: Number of read operations=0
14/09/18 13:05:51 INFO mapred.JobClient: FILE: Number of large read operations=0
14/09/18 13:05:51 INFO mapred.JobClient: FILE: Number of write operations=0
14/09/18 13:05:51 INFO mapred.JobClient: HDFS: Number of bytes read=782797980
14/09/18 13:05:51 INFO mapred.JobClient: HDFS: Number of bytes written=0
14/09/18 13:05:51 INFO mapred.JobClient: HDFS: Number of read operations=88
14/09/18 13:05:51 INFO mapred.JobClient: HDFS: Number of large read operations=0
14/09/18 13:05:51 INFO mapred.JobClient: HDFS: Number of write operations=0
14/09/18 13:05:51 INFO mapred.JobClient: S3: Number of bytes read=0
14/09/18 13:05:51 INFO mapred.JobClient: S3: Number of bytes written=782775062
14/09/18 13:05:51 INFO mapred.JobClient: S3: Number of read operations=0
14/09/18 13:05:51 INFO mapred.JobClient: S3: Number of large read operations=0
14/09/18 13:05:51 INFO mapred.JobClient: S3: Number of write operations=0
14/09/18 13:05:51 INFO mapred.JobClient: Job Counters
14/09/18 13:05:51 INFO mapred.JobClient: Launched map tasks=8
14/09/18 13:05:51 INFO mapred.JobClient: Total time spent by all maps in occupied slots (ms)=454335
14/09/18 13:05:51 INFO mapred.JobClient: Total time spent by all reduces in occupied slots (ms)=0
14/09/18 13:05:51 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
14/09/18 13:05:51 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
14/09/18 13:05:51 INFO mapred.JobClient: Map-Reduce Framework
14/09/18 13:05:51 INFO mapred.JobClient: Map input records=125
14/09/18 13:05:51 INFO mapred.JobClient: Map output records=53
14/09/18 13:05:51 INFO mapred.JobClient: Input split bytes=798
14/09/18 13:05:51 INFO mapred.JobClient: Spilled Records=0
14/09/18 13:05:51 INFO mapred.JobClient: CPU time spent (ms)=50250
14/09/18 13:05:51 INFO mapred.JobClient: Physical memory (bytes) snapshot=1930326016
14/09/18 13:05:51 INFO mapred.JobClient: Virtual memory (bytes) snapshot=9781469184
14/09/18 13:05:51 INFO mapred.JobClient: Total committed heap usage (bytes)=5631639552
14/09/18 13:05:51 INFO mapred.JobClient: org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter
14/09/18 13:05:51 INFO mapred.JobClient: BYTES_READ=22883
14/09/18 13:05:51 INFO mapred.JobClient: distcp
14/09/18 13:05:51 INFO mapred.JobClient: Bytes copied=782769559
14/09/18 13:05:51 INFO mapred.JobClient: Bytes expected=782769559
14/09/18 13:05:51 INFO mapred.JobClient: Files copied=70
14/09/18 13:05:51 INFO mapred.JobClient: Files skipped=53
Here is another snippet from the JobTracker UI:
2014-09-18 13:04:24,381 INFO org.apache.hadoop.fs.s3native.NativeS3FileSystem: OutputStream for key '09/01/01/SEARCHES/_distcp_tmp_hrb8ba/part-m-00005.avro' upload complete
2014-09-18 13:04:25,136 INFO org.apache.hadoop.tools.DistCp: FAIL part-m-00005.avro : java.io.IOException: Fail to rename tmp file (=s3://magnetic-test/09/01/01/SEARCHES/_distcp_tmp_hrb8ba/part-m-00005.avro) to destination file (=s3://abc/09/01/01/SEARCHES/part-m-00005.avro)
at org.apache.hadoop.tools.DistCp$CopyFilesMapper.rename(DistCp.java:494)
at org.apache.hadoop.tools.DistCp$CopyFilesMapper.copy(DistCp.java:463)
at org.apache.hadoop.tools.DistCp$CopyFilesMapper.map(DistCp.java:549)
at org.apache.hadoop.tools.DistCp$CopyFilesMapper.map(DistCp.java:316)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:417)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
at org.apache.hadoop.mapred.Child.main(Child.java:262)
Caused by: java.io.IOException
at org.apache.hadoop.tools.DistCp$CopyFilesMapper.rename(DistCp.java:490)
... 11 more
Does anyone know about this issue?
I got this resolved by adding -D mapred.task.timeout=60000000 to the distcp command.
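In context, the full command would look something like this (source and destination paths are placeholders):

hadoop distcp -D mapred.task.timeout=60000000 hdfs:///path/to/avro-dir s3://bucket/destination/

The long timeout presumably helps because the rename into S3 (a server-side copy under the hood) can stall long enough to trip the default task timeout.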
I tried the suggested answer, but with no luck. I experienced the issue when copying many small files (on the order of thousands, which in total did not amount to more than half a gigabyte). I couldn't make the distcp command work (same error as posted by the OP), so switching to hadoop fs -cp was my solution. As a side note, in the same cluster, using distcp to copy other, much larger files worked fine.
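A minimal sketch of that fallback (paths are placeholders):

hadoop fs -cp hdfs:///path/to/avro-dir s3://bucket/destination/

Unlike distcp, hadoop fs -cp copies through the client in a single process rather than as a MapReduce job, which is why it sidesteps per-task timeouts but scales poorly for large transfers.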
I am trying to run a graph processing job on Hadoop using a 1 GB input, but my reduce tasks are being killed by the ApplicationMaster.
Here is the output:
14/06/29 16:15:02 INFO mapreduce.Job: map 100% reduce 53%
14/06/29 16:15:03 INFO mapreduce.Job: map 100% reduce 57%
14/06/29 16:15:04 INFO mapreduce.Job: map 100% reduce 60%
14/06/29 16:15:05 INFO mapreduce.Job: map 100% reduce 63%
14/06/29 16:15:05 INFO mapreduce.Job: Task Id : attempt_1404050864296_0002_r_000003_0, Status : FAILED
Container killed on request. Exit code is 137
Container exited with a non-zero exit code 137
Killed by external signal
and in the end the job fails with the following output:
14/06/29 16:11:58 INFO mapreduce.Job: Job job_1404050864296_0001 failed with state FAILED due to: Task failed task_1404050864296_0001_r_000001
Job failed as tasks failed. failedMaps:0 failedReduces:1
14/06/29 16:11:58 INFO mapreduce.Job: Counters: 38
File System Counters
FILE: Number of bytes read=1706752372
FILE: Number of bytes written=3414132444
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=1319319669
HDFS: Number of bytes written=0
HDFS: Number of read operations=30
HDFS: Number of large read operations=0
HDFS: Number of write operations=0
Job Counters
Failed reduce tasks=7
Killed reduce tasks=1
Launched map tasks=10
Launched reduce tasks=8
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=12527776
Total time spent by all reduces in occupied slots (ms)=1256256
Total time spent by all map tasks (ms)=782986
Total time spent by all reduce tasks (ms)=78516
Total vcore-seconds taken by all map tasks=782986
Total vcore-seconds taken by all reduce tasks=78516
Total megabyte-seconds taken by all map tasks=6263888000
Total megabyte-seconds taken by all reduce tasks=628128000
Map-Reduce Framework
Map input records=85331845
Map output records=170663690
Map output bytes=1365309520
Map output materialized bytes=1706637020
Input split bytes=980
Combine input records=0
Spilled Records=341327380
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=2573
CPU time spent (ms)=820310
Physical memory (bytes) snapshot=18048614400
Virtual memory (bytes) snapshot=72212246528
Total committed heap usage (bytes)=28289007616
File Input Format Counters
Bytes Read=1319318689
Exception in thread "main" java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:836)
at pegasus.DegDist.run(DegDist.java:201)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at pegasus.DegDist.main(DegDist.java:158)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
I have checked the logs, but there is no information about why the reduce task was killed. Is there a way to find out why this reduce task was killed? I am interested in the specific reason for killing the reduce task.
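For what it's worth, exit code 137 is 128 + 9, i.e. the container was terminated with SIGKILL; together with "Killed by external signal", on YARN this most often means the NodeManager killed the container for exceeding its memory limit. Two hedged starting points: pull the aggregated container logs (assuming log aggregation is enabled), and, since the stack trace shows the job runs through ToolRunner, try raising the reducer memory with generic options (the jar name, arguments, and values below are illustrative, not the actual invocation):

yarn logs -applicationId application_1404050864296_0001 | grep -i kill
hadoop jar pegasus.jar pegasus.DegDist -D mapreduce.reduce.memory.mb=4096 -D mapreduce.reduce.java.opts=-Xmx3276m <input> <output>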