I am studying Hadoop and testing with Hadoop Streaming, using Ruby, whether my MapReduce algorithm works without error.
So I ran the following command:
hadoop jar hadoop-streaming-2.7.2.jar -files mapper.rb,reducer.rb -mapper mapper.rb -reducer reducer.rb -input test.json -output test
But the following error occurred:
dir/usercache/Kuma/appcache/application_1469093819516_0005/container_1469093819516_0005_01_000002/./mapper.rb": error=2, No such file or directory
And here is the complete error output from one job:
16/07/21 19:22:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
packageJobJar: [/var/folders/l_/21nh6rmn6f3fn3vd55kh0b5h0000gn/T/hadoop-unjar8708804571112102348/] [] /var/folders/l_/21nh6rmn6f3fn3vd55kh0b5h0000gn/T/streamjob5933629893966751936.jar tmpDir=null
16/07/21 19:22:05 INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:8032
16/07/21 19:22:05 INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:8032
16/07/21 19:22:06 INFO mapred.FileInputFormat: Total input paths to process : 1
16/07/21 19:22:06 INFO mapreduce.JobSubmitter: number of splits:2
16/07/21 19:22:06 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1469093819516_0005
16/07/21 19:22:06 INFO impl.YarnClientImpl: Submitted application application_1469093819516_0005
16/07/21 19:22:06 INFO mapreduce.Job: The url to track the job: http://hatanokaoruakira-no-MacBook-Air.local:8088/proxy/application_1469093819516_0005/
16/07/21 19:22:06 INFO mapreduce.Job: Running job: job_1469093819516_0005
16/07/21 19:22:13 INFO mapreduce.Job: Job job_1469093819516_0005 running in uber mode : false
16/07/21 19:22:13 INFO mapreduce.Job: map 0% reduce 0%
16/07/21 19:22:18 INFO mapreduce.Job: Task Id : attempt_1469093819516_0005_m_000000_0, Status : FAILED
Error: java.lang.RuntimeException: Error in configuring object
at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:112)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:78)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:449)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
... 9 more
Caused by: java.lang.RuntimeException: Error in configuring object
at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:112)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:78)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:38)
... 14 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
... 17 more
Caused by: java.lang.RuntimeException: configuration exception
at org.apache.hadoop.streaming.PipeMapRed.configure(PipeMapRed.java:222)
at org.apache.hadoop.streaming.PipeMapper.configure(PipeMapper.java:66)
... 22 more
Caused by: java.io.IOException: Cannot run program "/private/tmp/hadoop-Kuma/nm-local-dir/usercache/Kuma/appcache/application_1469093819516_0005/container_1469093819516_0005_01_000002/./mapper.rb": error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
at org.apache.hadoop.streaming.PipeMapRed.configure(PipeMapRed.java:209)
... 23 more
Caused by: java.io.IOException: error=2, No such file or directory
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.<init>(UNIXProcess.java:248)
at java.lang.ProcessImpl.start(ProcessImpl.java:134)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
... 24 more
mapper.rb and reducer.rb exist in the current directory, from which I executed the command.
The wordcount MapReduce example I use for testing runs without error, so I think the Hadoop setup is OK.
Environment
macOS El Capitan
Hadoop 2.7.2 (pseudo-distributed mode)
Files specified with the -files option must be in HDFS:
command:
hadoop jar hadoop-streaming.jar -files f1.txt,f2.txt -input f1.txt -output test1
Error:
Input path does not exist: hdfs://quickstart.cloudera:8020/user/cloudera/f1.txt
After copying the files to HDFS (in your case, according to the error in your log, you will need to copy them to the HDFS root directory):
hadoop fs -put f1.txt /user/cloudera
hadoop fs -put f2.txt /user/cloudera
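Before re-running the job, you can verify the upload (a quick sanity check; /user/cloudera is the HDFS home directory used in this example):
hadoop fs -ls /user/cloudera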
The job then ran with NO errors:
[cloudera@quickstart hadoop-mapreduce]$ hadoop jar hadoop-streaming.jar -files f1.txt,f2.txt -input f1.txt -output test1
packageJobJar: [] [/usr/jars/hadoop-streaming-2.6.0-cdh5.7.0.jar] /tmp/streamjob5729321067745308196.jar tmpDir=null
16/07/23 11:36:16 INFO client.RMProxy: Connecting to ResourceManager at quickstart.cloudera/127.0.0.1:8032
16/07/23 11:36:17 INFO client.RMProxy: Connecting to ResourceManager at quickstart.cloudera/127.0.0.1:8032
16/07/23 11:36:18 INFO mapred.FileInputFormat: Total input paths to process : 1
16/07/23 11:36:18 INFO mapreduce.JobSubmitter: number of splits:1
From the Hadoop Streaming documentation ("Making Files Available to Tasks"):
The -files and -archives options allow you to make files and archives available to the tasks. The argument is a URI to the file or archive that you have already uploaded to HDFS. These files and archives are cached across jobs. You can retrieve the host and fs_port values from the fs.default.name config variable.
Note: The -files and -archives options are generic options. Be sure to place the generic options before the command options, otherwise the command will fail.
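Applied to the original question's command, the whole flow might look like the sketch below. It assumes mapper.rb and reducer.rb are first uploaded to /user/Kuma in HDFS (a hypothetical location based on the username in the log) and that both scripts are executable and begin with a Ruby shebang line:
# upload the scripts to HDFS, then reference them by URI as a generic option placed before the command options
hadoop fs -put mapper.rb reducer.rb /user/Kuma
hadoop jar hadoop-streaming-2.7.2.jar \
    -files hdfs:///user/Kuma/mapper.rb,hdfs:///user/Kuma/reducer.rb \
    -mapper mapper.rb \
    -reducer reducer.rb \
    -input test.json \
    -output test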
Here is the error snapshot:
[hduser@secondary ~]$ yarn jar test_word_count.jar com.test.wordc.WordCount /temp /test_output
18/10/11 16:13:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/10/11 16:13:02 INFO client.RMProxy: Connecting to ResourceManager at master/10.0.70.148:8032
18/10/11 16:13:02 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
18/10/11 16:13:03 INFO input.FileInputFormat: Total input paths to process : 1
18/10/11 16:13:03 INFO mapreduce.JobSubmitter: number of splits:1
18/10/11 16:13:04 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1539254507997_0001
18/10/11 16:13:04 INFO impl.YarnClientImpl: Submitted application application_1539254507997_0001
18/10/11 16:13:04 INFO mapreduce.Job: The url to track the job: http://secondary:8088/proxy/application_1539254507997_0001/
18/10/11 16:13:04 INFO mapreduce.Job: Running job: job_1539254507997_0001
18/10/11 16:13:05 INFO mapreduce.Job: Job job_1539254507997_0001 running in uber mode : false
18/10/11 16:13:05 INFO mapreduce.Job: map 0% reduce 0%
18/10/11 16:13:05 INFO mapreduce.Job: Job job_1539254507997_0001 failed with state FAILED due to: Application application_1539254507997_0001 failed 2 times due to Error launching appattempt_1539254507997_0001_000002. Got exception: java.io.IOException: Failed on local exception: java.io.IOException: java.io.EOFException; Host Details : local host is: "secondary/10.0.70.149"; destination host is: "slave4":38102;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
at org.apache.hadoop.ipc.Client.call(Client.java:1472)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy32.startContainers(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:119)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:254)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: java.io.EOFException
at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:680)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:643)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:730)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
at org.apache.hadoop.ipc.Client.call(Client.java:1438)
... 9 more
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:367)
at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:553)
at org.apache.hadoop.ipc.Client$Connection.access$1800(Client.java:368)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:722)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:718)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:717)
... 12 more
. Failing the application.
18/10/11 16:13:05 INFO mapreduce.Job: Counters: 0
Here is the /etc/hosts file
[hduser@secondary ~]$ cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.70.149 secondary
10.0.70.148 master
10.0.70.143 slave1
10.0.70.144 slave2
10.0.70.145 slave3
10.0.70.146 slave4
10.0.70.147 slave5
All the services are running fine. It is a 6-node cluster with 5 DNs and 1 NN.
After submitting the job, I get the error listed above. The NN's Hadoop version is 2.6.0, while the Hadoop version on the DNs is 2.5.2. It is a Maven build with Hadoop version 2.6.0 in pom.xml.
I fixed the error by installing Hadoop 2.5.2 on the namenode itself. It seems the whole cluster requires the same Hadoop version.
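A quick way to confirm is to compare the output of the following on every node:
hadoop version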
I installed Hadoop 2.9.0 and have 4 nodes. The namenode and resourcemanager services run on the master, and the datanodes and nodemanagers run on the slaves. Now I want to run a Python MapReduce job, but the job is not successful!
Please tell me what I should do.
Log of the job run in the terminal:
hadoop@hadoopmaster:/usr/local/hadoop$ bin/hadoop jar share/hadoop/tools/lib/hadoop-streaming-2.9.0.jar -file mapper.py -mapper mapper.py -file reducer.py -reducer reducer.py -input /user/hadoop/* -output /user/hadoop/output
18/06/17 04:26:28 WARN streaming.StreamJob: -file option is deprecated, please use generic option -files instead.
18/06/17 04:26:28 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
packageJobJar: [mapper.py, /tmp/hadoop-unjar3316382199020742755/] [] /tmp/streamjob4930230269569102931.jar tmpDir=null
18/06/17 04:26:28 INFO client.RMProxy: Connecting to ResourceManager at hadoopmaster/192.168.111.175:8050
18/06/17 04:26:29 INFO client.RMProxy: Connecting to ResourceManager at hadoopmaster/192.168.111.175:8050
18/06/17 04:26:29 WARN hdfs.DataStreamer: Caught exception
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1249)
at java.lang.Thread.join(Thread.java:1323)
at org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:980)
at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:630)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:807)
18/06/17 04:26:29 WARN hdfs.DataStreamer: Caught exception
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1249)
at java.lang.Thread.join(Thread.java:1323)
at org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:980)
at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:630)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:807)
18/06/17 04:26:29 INFO mapred.FileInputFormat: Total input files to process : 4
18/06/17 04:26:29 WARN hdfs.DataStreamer: Caught exception
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1249)
at java.lang.Thread.join(Thread.java:1323)
at org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:980)
at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:630)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:807)
18/06/17 04:26:29 INFO mapreduce.JobSubmitter: number of splits:4
18/06/17 04:26:29 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
18/06/17 04:26:29 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1529233655437_0004
18/06/17 04:26:30 INFO impl.YarnClientImpl: Submitted application application_1529233655437_0004
18/06/17 04:26:30 INFO mapreduce.Job: The url to track the job: http://hadoopmaster.png.com:8088/proxy/application_1529233655437_0004/
18/06/17 04:26:30 INFO mapreduce.Job: Running job: job_1529233655437_0004
18/06/17 04:45:12 INFO mapreduce.Job: Job job_1529233655437_0004 running in uber mode : false
18/06/17 04:45:12 INFO mapreduce.Job: map 0% reduce 0%
18/06/17 04:45:12 INFO mapreduce.Job: Job job_1529233655437_0004 failed with state FAILED due to: Application application_1529233655437_0004 failed 2 times due to Error launching appattempt_1529233655437_0004_000002. Got exception: org.apache.hadoop.net.ConnectTimeoutException: Call From hadoopmaster.png.com/192.168.111.175 to hadoopslave1.png.com:40569 failed on socket timeout exception: org.apache.hadoop.net.ConnectTimeoutException: 20000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=hadoopslave1.png.com/192.168.111.173:40569]; For more details see: http://wiki.apache.org/hadoop/SocketTimeout
at sun.reflect.GeneratedConstructorAccessor38.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:824)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:774)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1497)
at org.apache.hadoop.ipc.Client.call(Client.java:1439)
at org.apache.hadoop.ipc.Client.call(Client.java:1349)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy82.startContainers(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:128)
at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy83.startContainers(Unknown Source)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:122)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:307)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.net.ConnectTimeoutException: 20000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=hadoopslave1.png.com/192.168.111.173:40569]
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:534)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:687)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:790)
at org.apache.hadoop.ipc.Client$Connection.access$3500(Client.java:411)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1554)
at org.apache.hadoop.ipc.Client.call(Client.java:1385)
... 19 more
. Failing the application.
18/06/17 04:45:13 INFO mapreduce.Job: Counters: 0
18/06/17 04:45:13 ERROR streaming.StreamJob: Job not successful!
Streaming Command Failed!
OK, I found the cause of the problem. The following error had to be resolved:
hadoopmaster.png.com/192.168.111.175 to hadoopslave1.png.com:40569
failed on socket timeout exception
So I just did:
sudo ufw allow 40569
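Note that this NodeManager port is typically ephemeral and can change after a restart. A more durable approach (a sketch, assuming ufw on every node and the 192.168.111.0/24 subnet from the log) is to allow all traffic between the cluster hosts:
sudo ufw allow from 192.168.111.0/24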
I am running an example MapReduce job that comes with Hadoop 2.8.1.
I am using these commands:
bin/hdfs dfs -copyFromLocal etc/hadoop/core-site.xml .
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar grep ./core-site.xml output 'configuration'
However, when I run this, the job exits (the main error appears in the title):
***********s-mbp-2:hadoop-2.8.1 ***********$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.1.jar grep ./core-site.xml output 'configuration'
17/09/12 14:30:28 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/09/12 14:30:29 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
17/09/12 14:30:29 INFO input.FileInputFormat: Total input files to process : 1
17/09/12 14:30:30 INFO mapreduce.JobSubmitter: number of splits:1
17/09/12 14:30:30 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1505251091124_0001
17/09/12 14:30:30 INFO impl.YarnClientImpl: Submitted application application_1505251091124_0001
17/09/12 14:30:30 INFO mapreduce.Job: The url to track the job: http://***********s-mbp-2.lan:8088/proxy/application_1505251091124_0001/
17/09/12 14:30:30 INFO mapreduce.Job: Running job: job_1505251091124_0001
17/09/12 14:30:33 INFO mapreduce.Job: Job job_1505251091124_0001 running in uber mode : false
17/09/12 14:30:33 INFO mapreduce.Job: map 0% reduce 0%
17/09/12 14:30:33 INFO mapreduce.Job: Job job_1505251091124_0001 failed with state FAILED due to: Application application_1505251091124_0001 failed 2 times due to AM Container for appattempt_1505251091124_0001_000002 exited with exitCode: 127
Failing this attempt.Diagnostics: Exception from container-launch.
Container id: container_1505251091124_0001_02_000001
Exit code: 127
Stack trace: ExitCodeException exitCode=127:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:972)
at org.apache.hadoop.util.Shell.run(Shell.java:869)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1170)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:236)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:305)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:84)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Container exited with a non-zero exit code 127
For more detailed output, check the application tracking page: http://***********s-mbp-2.lan:8088/cluster/app/application_1505251091124_0001 Then click on links to logs of each attempt.
. Failing the application.
17/09/12 14:30:33 INFO mapreduce.Job: Counters: 0
17/09/12 14:30:33 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
17/09/12 14:30:33 INFO mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/***********/.staging/job_1505251091124_0002
org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://localhost:9000/user/***********/grep-temp-576807334
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:329)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:271)
at org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat.listStatus(SequenceFileInputFormat.java:59)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:393)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:303)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:320)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:198)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1341)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1338)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1338)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1359)
at org.apache.hadoop.examples.Grep.run(Grep.java:94)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.examples.Grep.main(Grep.java:103)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:234)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
What is going wrong, and how do I fix it?
It looks like the program can't access the path in the exception. You may try manually creating that directory in HDFS with mkdir before running the job. Hope it helps.
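A minimal sketch, following the pattern in the exception (the username is masked in the log, so substitute your own; -p creates any missing parent directories):
bin/hdfs dfs -mkdir -p /user/<your-username>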
I am trying to move data from one CDH cluster (CDH 4.7.1) to another CDH cluster (CDH 5.4.1) using the distcp command as below:
hadoop distcp -D mapred.task.timeout=60000000 -update hdfs://namenodeIp of source(CDH4):8020/user/admin/distcptest1 webhdfs://namenodeIp of target(CDH5):50070/user/admin/testdir
With this command, directories and subdirectories are copied from the source CDH4 cluster to the target CDH5 cluster, but the files themselves are not copied to the target cluster; the copy fails with the error below:
Fail to rename tmp file (=webhdfs://10.10.200.221:50070/user/admin/testdir/_distcp_tmp_g79i9w/distcptest1/account.xlsx) to destination file (=webhdfs://10.10.200.221:50070/user/admin/testdir/distcptest1/account.xlsx)
The stacktrace found in the logs of that job is as follows:
2016-02-19 03:16:57,006 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2016-02-19 03:16:58,686 WARN org.apache.hadoop.conf.Configuration: session.id is deprecated. Instead, use dfs.metrics.session-id
2016-02-19 03:16:58,693 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=MAP, sessionId=
2016-02-19 03:16:59,736 INFO org.apache.hadoop.util.ProcessTree: setsid exited with exit code 0
2016-02-19 03:16:59,752 INFO org.apache.hadoop.mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@715f1f9c
2016-02-19 03:17:00,248 INFO org.apache.hadoop.mapred.MapTask: Processing split: hdfs://n1.quadratics.com:8020/user/admin/.stagingdistcp_g79i9w/_distcp_src_files:0+2443
2016-02-19 03:17:00,345 WARN mapreduce.Counters: Counter name MAP_INPUT_BYTES is deprecated. Use FileInputFormatCounters as group name and BYTES_READ as counter name instead
2016-02-19 03:17:00,353 INFO org.apache.hadoop.mapred.MapTask: numReduceTasks: 0
2016-02-19 03:17:01,098 INFO org.apache.hadoop.tools.DistCp: FAIL distcptest1/account.xlsx : java.io.IOException: Fail to rename tmp file (=webhdfs://10.10.200.221:50070/user/admin/testdir/_distcp_tmp_g79i9w/distcptest1/account.xlsx) to destination file (=webhdfs://10.10.200.221:50070/user/admin/testdir/distcptest1/account.xlsx)
at org.apache.hadoop.tools.DistCp$CopyFilesMapper.rename(DistCp.java:494)
at org.apache.hadoop.tools.DistCp$CopyFilesMapper.copy(DistCp.java:463)
at org.apache.hadoop.tools.DistCp$CopyFilesMapper.map(DistCp.java:549)
at org.apache.hadoop.tools.DistCp$CopyFilesMapper.map(DistCp.java:316)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:417)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
at org.apache.hadoop.mapred.Child.main(Child.java:262)
Caused by: java.io.IOException
at org.apache.hadoop.tools.DistCp$CopyFilesMapper.rename(DistCp.java:490)
... 11 more
2016-02-19 03:17:10,457 INFO org.apache.hadoop.tools.DistCp: FAIL distcptest1/_distcp_logs_ww86cq/_logs/history/job_201602160057_0105_1455872921915_hdfs_distcp : java.io.IOException: Fail to rename tmp file (=webhdfs://10.10.200.221:50070/user/admin/testdir/_distcp_tmp_g79i9w/distcptest1/_distcp_logs_ww86cq/_logs/history/job_201602160057_0105_1455872921915_hdfs_distcp) to destination file (=webhdfs://10.10.200.221:50070/user/admin/testdir/distcptest1/_distcp_logs_ww86cq/_logs/history/job_201602160057_0105_1455872921915_hdfs_distcp)
at org.apache.hadoop.tools.DistCp$CopyFilesMapper.rename(DistCp.java:494)
at org.apache.hadoop.tools.DistCp$CopyFilesMapper.copy(DistCp.java:463)
at org.apache.hadoop.tools.DistCp$CopyFilesMapper.map(DistCp.java:549)
at org.apache.hadoop.tools.DistCp$CopyFilesMapper.map(DistCp.java:316)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:417)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
at org.apache.hadoop.mapred.Child.main(Child.java:262)
Caused by: java.io.IOException
at org.apache.hadoop.tools.DistCp$CopyFilesMapper.rename(DistCp.java:490)
... 11 more
I got the above error even when using this command instead:
hadoop distcp -D mapred.task.timeout=60000000 -update webhdfs://namenodeIp of source(CDH4):50070/user/admin/distcptest1 webhdfs://namenodeIp of target(CDH5):50070/user/admin/testdir
WebHDFS is enabled in both clusters.
Regarding execution of the distcp command: I ran it from my source cluster (CDH4) as the 'admin' user, which is possible per the Cloudera link given below:
http://www.cloudera.com/documentation/enterprise/5-4-x/topics/cdh_admin_distcp_data_cluster_migrate.html
When I monitored the target cluster, the file from the source cluster was not being written to the temporary folder created by distcp in the target cluster. That is why the rename fails in the target cluster: the target path does not contain that file. Can someone tell me why the file write is failing?
I've searched related posts on Stack Overflow and tried those solutions, but none of them fixed this problem. Any ideas for fixing this would be a great help.
hdfs is a user that is not able to run YARN jobs; it will most likely be a banned user in your YARN config.
If this is a secure cluster, you need a trust between both Kerberos domains as well.
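For example, with the LinuxContainerExecutor the banned list usually lives in container-executor.cfg, where hdfs is banned by default (a sketch; the exact file location varies by distribution):
# container-executor.cfg (defaults shown; removing a user from this list has security implications)
banned.users=hdfs,yarn,mapred,bin
min.user.id=1000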
I am using Hadoop 2.2.0. hadoop-mapreduce-examples-2.2.0.jar runs fine on HDFS.
I made a wordcount program in Eclipse, added the jars using Maven, and ran this jar:
ubuntu@ubuntu-linux:~$ yarn jar Sample-0.0.1-SNAPSHOT.jar com.vij.Sample.WordCount /user/ubuntu/wordcount/input/vij.txt user/ubuntu/wordcount/output
It gives the following error:
15/02/17 13:09:09 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/02/17 13:09:10 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
15/02/17 13:09:11 ERROR security.UserGroupInformation: PriviledgedActionException as:ubuntu (auth:SIMPLE) cause:org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://localhost:54310/user/ubuntu/wordcount/input/vij.txt already exists
Exception in thread "main" org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://localhost:54310/user/ubuntu/wordcount/input/vij.txt already exists
at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:146)
at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:456)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:342)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
at com.vij.Sample.WordCount.main(WordCount.java:33)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
The jar is on my local system. Both the input and output paths are on HDFS, and no output directory exists at the output path on HDFS.
Please advise.
Thanks.
Actually, the error is:
ERROR security.UserGroupInformation: PriviledgedActionException as:ubuntu (auth:SIMPLE) cause:org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://localhost:54310/user/ubuntu/wordcount/input/vij.txt already exists
Delete the output file "vij.txt" that already exists, or output to a different path.
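A sketch of the first option, using the path reported in the error (double-check it before deleting; note that in this log the reported "output" path is actually under your input directory, so make sure this is really what you want to remove):
bin/hadoop fs -rm -r hdfs://localhost:54310/user/ubuntu/wordcount/input/vij.txt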
Or try the following steps:
Download and unzip the WordCount source code from this link under $HADOOP_HOME.
$ cd $HADOOP_HOME
$ wget http://salsahpc.indiana.edu/tutorial/source_code/Hadoop-WordCount.zip
$ unzip Hadoop-WordCount.zip
Then upload the input files (any text-format file) into the Hadoop Distributed File System (HDFS):
$ bin/hadoop fs -put $HADOOP_HOME/Hadoop-WordCount/input/ input
$ bin/hadoop fs -ls input
Here, $HADOOP_HOME/Hadoop-WordCount/input/ is the local directory where the program inputs are stored. The second "input" represents the remote destination directory on HDFS.
After uploading the inputs into HDFS, run the WordCount program with the following command. We assume you have already compiled the word count program.
$ bin/hadoop jar $HADOOP_HOME/Hadoop-WordCount/wordcount.jar WordCount input output
If Hadoop is running correctly, it will print progress messages similar to the following:
WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
11/11/02 18:34:46 INFO input.FileInputFormat: Total input paths to process : 1
11/11/02 18:34:46 INFO mapred.JobClient: Running job: job_201111021738_0001
11/11/02 18:34:47 INFO mapred.JobClient: map 0% reduce 0%
11/11/02 18:35:01 INFO mapred.JobClient: map 100% reduce 0%
11/11/02 18:35:13 INFO mapred.JobClient: map 100% reduce 100%
11/11/02 18:35:18 INFO mapred.JobClient: Job complete: job_201111021738_0001
11/11/02 18:35:18 INFO mapred.JobClient: Counters: 25
...