I am running a simple wordcount job on a 1GB text file. My cluster has 8 DataNodes and 1 NameNode, each with a storage capacity of 3GB.
When I run wordcount, the map phase always succeeds, but the reducer throws an error and fails. Please find the error message below.
14/10/05 15:42:02 INFO mapred.JobClient: map 100% reduce 31%
14/10/05 15:42:07 INFO mapred.JobClient: Task Id : attempt_201410051534_0002_m_000016_0, Status : FAILED
FSError: java.io.IOException: No space left on device
14/10/05 15:42:14 INFO mapred.JobClient: Task Id : attempt_201410051534_0002_r_000000_0, Status : FAILED
java.io.IOException: Task: attempt_201410051534_0002_r_000000_0 - The reduce copier failed
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:390)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for file:/app/hadoop/tmp/mapred/local/taskTracker/hduser/jobcache/job_201410051534_0002/attempt_201410051534_0002_r_000000_0/output/map_18.out
Could you please tell me how I can fix this problem?
Thanks,
Navaz
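One way to confirm that local disk space is the problem before re-running the job (the path comes from the stack trace above; adjust it if your mapred.local.dir points elsewhere):
# On each DataNode/TaskTracker: how full is the disk backing hadoop.tmp.dir?
df -h /app/hadoop/tmp
# How much of it is intermediate map output left over from failed attempts?
du -sh /app/hadoop/tmp/mapred/local
# From any node: overall HDFS capacity and per-DataNode usage
hadoop dfsadmin -report
With only 3GB per node, the intermediate shuffle data for a 1GB input can easily exhaust the local disk; freeing space under mapred.local.dir, or pointing it at a larger volume, is the usual fix.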
I am trying to run s3distcp from my local Hadoop pseudo-distributed cluster. Executing s3distcp.jar produced the following stack trace. It seems that the reducer task is failing, but I am not able to pinpoint the reason:
18/02/21 12:14:01 WARN mapred.LocalJobRunner: job_local639263089_0001
java.lang.Exception: java.lang.RuntimeException: Reducer task failed to copy 1 files: file:/home/chirag/workspaces/lzo/data-1518765365022.lzo etc
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:489)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:556)
Caused by: java.lang.RuntimeException: Reducer task failed to copy 1 files: file:/home/chirag/workspaces/lzo/data-1518765365022.lzo etc
at com.amazon.external.elasticmapreduce.s3distcp.CopyFilesReducer.close(CopyFilesReducer.java:70)
at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:250)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:346)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
18/02/21 12:14:02 INFO mapreduce.Job: Job job_local639263089_0001 running in uber mode : false
18/02/21 12:14:02 INFO mapreduce.Job: map 100% reduce 0%
18/02/21 12:14:02 INFO mapreduce.Job: Job job_local639263089_0001 failed with state FAILED due to: NA
18/02/21 12:14:02 INFO mapreduce.Job: Counters: 35
I'm getting the same error. In my case, I found logs in HDFS /var/log/hadoop-yarn/apps/hadoop/logs related to the MR job that s3-dist-cp kicks off.
hadoop fs -ls /var/log/hadoop-yarn/apps/hadoop/logs
I copied them out to the local filesystem:
hadoop fs -get /var/log/hadoop-yarn/apps/hadoop/logs/application_nnnnnnnnnnnnn_nnnn/ip-nnn-nn-nn-nnn.ec2.internal_nnnn
Then I examined them in a text editor to find more detailed diagnostics from the reducer phase. In my case the reducer was getting an error back from the S3 service; you might find a different error.
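If YARN log aggregation is enabled, the same logs can usually be fetched by application ID instead of copying the directory by hand (the ID below is a placeholder matching the masked one above):
# Pull all aggregated logs for the s3-dist-cp job into one file
yarn logs -applicationId application_nnnnnnnnnnnnn_nnnn > app.log
# Scan the combined output for the reducer's underlying exception
grep -in -B 2 -A 5 "exception" app.log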
I am trying to run a wordcount job in Hadoop. Due to a previous error, I had to turn off safe mode on the NameNode. Now, however, when trying to run the job, I am getting the following error:
14/08/06 14:49:08 INFO mapreduce.Job: map 1% reduce 0%
14/08/06 14:49:25 INFO mapreduce.Job: Task Id : attempt_1407336345567_0002_m_000158_0, Status : FAILED
Error: org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-971868671-192.168.50.2-1406571670535:blk_1073743276_2475 file=/wikidumps/enwiki-20130904-pages-meta-history3.xml-p000032706p000037161
at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:838)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:526)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:749)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:793)
at java.io.DataInputStream.read(DataInputStream.java:100)
at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:211)
at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:164)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:532)
at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:339)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
The log files do not show any particular cause. Does anyone know the reason for this error? Thanks in advance!
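Since you forced the NameNode out of safe mode, it is worth checking whether the block is genuinely missing (for example, a DataNode holding its only replicas never came back up) rather than this being a job-side problem. A quick check with fsck, using the path from the stack trace:
# Inspect the specific input file for missing or corrupt blocks
hdfs fsck /wikidumps/enwiki-20130904-pages-meta-history3.xml-p000032706p000037161 -files -blocks -locations
# Summarize corrupt/missing blocks across the whole filesystem
hdfs fsck / -list-corruptfileblocks
If fsck reports the block as missing while all DataNodes are up, the data is likely gone and the file needs to be re-uploaded.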
I have a question about HIPI, as I am new to this field. I am trying to run a simple example, and my command is:
$> bin/hadoop jar /opt/hipi-dev/examples/downloader.jar /user/hduser/hipiFile /user/hduser/outputhipi.hib 1
The hipiFile folder contains one text file with 4 image URLs, and the path in my build.xml file is correct. Nevertheless, it gives me the following error:
14/03/09 09:44:01 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
Found host successfully: 0
Tried to get 1 nodes, got 1
14/03/09 09:44:01 INFO input.FileInputFormat: Total input paths to process : 2
First n-1 nodes responsible for 4 images
Last node responsible for 4 images
14/03/09 09:44:02 INFO mapred.JobClient: Running job: job_201403090903_0003
14/03/09 09:44:03 INFO mapred.JobClient: map 0% reduce 0%
14/03/09 09:44:13 INFO mapred.JobClient: Task Id : attempt_201403090903_0003_m_000000_0, Status : FAILED
Error: java.lang.ClassNotFoundException: org.apache.hadoop.fs.FSDataInputStream
at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
at java.lang.ClassLoader.loadClass(ClassLoader.java:323)
at java.lang.ClassLoader.loadClass(ClassLoader.java:268)
at hipi.examples.downloader.Downloader$DownloaderMapper.map(Unknown Source)
at hipi.examples.downloader.Downloader$DownloaderMapper.map(Unknown Source)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:583)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
at org.apache.hadoop.mapred.Child.main(Child.java:170)
Check whether the hadoop-common-*.jar files are on the classpath.
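For example (jar names and locations vary by Hadoop version and distribution, so the paths below are assumptions):
# Print the classpath the hadoop launcher actually uses
bin/hadoop classpath | tr ':' '\n' | grep -iE 'hadoop-(common|core)'
# If the jar is missing, prepend it before launching the job
export HADOOP_CLASSPATH=/usr/lib/hadoop/hadoop-core.jar:$HADOOP_CLASSPATH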
I am running a MapReduce program that works fine on my CDH QuickStart VM, but on a multi-node cluster it gives the error below:
WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
14/02/12 00:23:06 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
14/02/12 00:23:06 INFO input.FileInputFormat: Total input paths to process : 1
14/02/12 00:23:07 INFO mapred.JobClient: Running job: job_201401221117_5777
14/02/12 00:23:08 INFO mapred.JobClient: map 0% reduce 0%
14/02/12 00:23:16 INFO mapred.JobClient: Task Id : attempt_201401221117_5777_m_000000_0, Status : FAILED
java.lang.RuntimeException: java.lang.ClassNotFoundException: Class Mappercsv not found
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1774)
at org.apache.hadoop.mapreduce.task.JobContextImpl.getMapperClass(JobContextImpl.java:191)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:631)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:330)
at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
at org.apache.hadoop.mapred.Child.main(Child.java:262)
Caused by: java.lang.ClassNotFoundException: Class Mappercsv not found
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1680)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1772)
... 8 more
Please help.
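Note the warning near the top of the output: "No job jar file set. User classes may not be found." That usually means the driver never tells Hadoop which jar contains Mappercsv, so the class resolves on the single-node VM (where it happens to be on the local classpath) but is never shipped to the cluster nodes. Calling job.setJarByClass(...) on a class from your jar in the driver, and launching with the jar named explicitly, normally fixes it. A sketch of the launch command (jar and class names are hypothetical):
# Ship the jar containing Mappercsv to the cluster along with the job
hadoop jar csvjob.jar com.example.CsvDriver /input /output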
I'm running a very small Hadoop cluster on EC2.
I'm starting a cluster using Whirr (version whirr-0.2.0-incubating) with 1 node running the JobTracker + NameNode and 4 nodes running DataNodes + TaskTrackers:
whirr.hardware-id=c1.medium
whirr.instance-templates=1 jt+nn,4 dn+tt
whirr.provider=ec2
When I run my job, I get the following error, not right away but after a while:
.....
11/01/17 17:31:47 INFO mapred.JobClient: map 100% reduce 66%
11/01/17 17:31:49 INFO mapred.JobClient: map 100% reduce 68%
11/01/17 17:31:52 INFO mapred.JobClient: map 100% reduce 70%
11/01/17 17:31:56 INFO mapred.JobClient: map 100% reduce 73%
11/01/17 17:32:01 INFO mapred.JobClient: Task Id : attempt_201101172141_0002_r_000000_0, Status : FAILED
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/root/test/test2/test3/_temporary/_attempt_201101172141_0002_r_000000_0/parts/81/part-00000 could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
....
I found the same error in the JobTracker log file:
....
2011-01-17 22:31:36,968 INFO org.apache.hadoop.mapred.JobTracker: Adding task (cleanup)'attempt_201101172141_0002_m_000004_1' to tip task_201101172141_0002_m_000004, for tracker 'tracker_ip-11-222-333-444.ec2.internal:localhost/127.0.0.1:44840'
2011-01-17 22:31:39,972 INFO org.apache.hadoop.mapred.JobTracker: Removed completed task 'attempt_201101172141_0002_m_000004_1' from 'tracker_ip-11-222-333-444.ec2.internal:localhost/127.0.0.1:44840'
2011-01-17 22:31:57,985 INFO org.apache.hadoop.mapred.TaskInProgress: Error from attempt_201101172141_0002_r_000000_0: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/root/dsp-test/test1/test2/_temporary/_attempt_201101172141_0002_r_000000_0/parts/81/part-00000 could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
In the job's configuration I have:
dfs.replication=2
mapred.child.java.opts=-server -Xmx180m -XX:ErrorFile=/mnt/hadoop/logs/logs/java/java_error.log
Does anyone have an idea why I'm getting this error?
Did you run out of disk space? HDFS sometimes complains this way when no live DataNode has room left for a new block replica.
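"Could only be replicated to 0 nodes" means the NameNode could not find a single live DataNode with space for the block, so full disks fit the symptoms. A quick way to check on this setup (assuming Hadoop's data lives on the EC2 instance-store volume, which Whirr commonly mounts at /mnt):
# Capacity and remaining space per DataNode, as seen by the NameNode
hadoop dfsadmin -report
# On each worker: is the local volume full?
df -h /mnt
If df shows the volume at 100%, clearing space (or adding DataNodes) should resolve the replication failures.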