Mkdirs failed to create file - hadoop

I could use some help.
I am working through some of the examples in data-algorithms-book by Parsian.
I uploaded my text file to HDFS and wrote a run.sh script, but when I run the job on Spark it fails with this error:
Mkdirs failed to create file:/output/2/_temporary/0/_temporary/attempt_201603120204_0000_m_000000_3 (exists=false, cwd=file:/home/hadoop/spark-1.6.0/work/app-20160312020423-0001/0)
This is driving me crazy. Can someone suggest a solution?
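In case it helps: the cwd=file:/... part of the message shows that the output path /output/2 is being resolved against the local filesystem of each Spark worker rather than against HDFS. One common workaround is to pass a fully qualified HDFS URI as the output argument. A minimal sketch, assuming the NameNode listens on namenode:9000 and that run.sh forwards its arguments to spark-submit (the host, port, and file names here are placeholders, not taken from the book):

# create the parent output directory on HDFS up front (optional)
hadoop fs -mkdir -p /output
# pass an explicit hdfs:// URI instead of a bare /output/2 path
./run.sh hdfs://namenode:9000/input/mytext.txt hdfs://namenode:9000/output/2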

Related

Hadoop java.io.IOException: Mkdirs failed to create /some/path when running a MapReduce job on Mac OS X

When I run my MR job on Mac OS X, I hit the following exception:
Exception in thread "main" java.io.IOException: Mkdirs failed to create /var/folders/9m/w_vzzmtx0rq0tt9whf_r4yhr0000gn/T/hadoop-unjar7688811202881231043/META-INF/license
at org.apache.hadoop.util.RunJar.ensureDirectory(RunJar.java:128)
at org.apache.hadoop.util.RunJar.unJar(RunJar.java:104)
at org.apache.hadoop.util.RunJar.unJar(RunJar.java:81)
at org.apache.hadoop.util.RunJar.run(RunJar.java:209)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
According to other posts, one workaround is to remove META-INF/LICENSE from the jar file, but that feels like a temporary fix to me.
I think the problem would go away if I could change the path where the temporary files are stored, currently:
/var/folders/9m/.../META-INF/license
I checked the permissions and tried changing the "hadoop.tmp.dir" value in core-site.xml, but that did not work for me.
PS: I know the underlying issue is that OS X is case-insensitive by default, which is why I am working from a directory on a mounted disk image that is case-sensitive.
Thanks in advance!
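One observation that may help here: judging by the /var/folders/... location in the trace, RunJar appears to unpack the jar under java.io.tmpdir rather than hadoop.tmp.dir, which would explain why the core-site.xml change has no effect. A sketch of redirecting the client JVM's temp directory onto the case-sensitive volume, assuming the disk image is mounted at /Volumes/casesafe (a hypothetical path):

# point the hadoop client's temp directory at a case-sensitive location
mkdir -p /Volumes/casesafe/tmp
export HADOOP_CLIENT_OPTS="-Djava.io.tmpdir=/Volumes/casesafe/tmp"
hadoop jar your-job.jar your.MainClass ...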

Error: -copyFromLocal: java.net.UnknownHostException

I am new to Java, Hadoop, etc.
I am having a problem when trying to copy a file to HDFS.
It says: "-copyFromLocal: java.net.UnknownHostException: quickstart.cloudera (...)"
How can I solve this? It is an exercise. You can see the problem in the images below.
Image with the problem
Image 2 with the error
Thank you very much.
As the error says, you need to supply an HDFS folder path as the destination, so the command should look like:
hadoop fs -copyFromLocal words.txt /HDFS/Folder/Path
Almost all errors you get while working with Hadoop surface as Java exceptions, because MapReduce is mostly written in Java, but that does not mean the problem is in the Java code itself.
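Separately, the UnknownHostException part of the message usually means the client cannot resolve the hostname quickstart.cloudera at all, so it is worth confirming name resolution before retrying. A quick check (the 127.0.0.1 entry below is only a placeholder; use whatever address your QuickStart VM actually has):

# does the hostname referenced by fs.defaultFS resolve?
ping -c 1 quickstart.cloudera
# if not, map it in /etc/hosts (placeholder address shown)
echo "127.0.0.1   quickstart.cloudera" | sudo tee -a /etc/hosts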

Hadoop mkdirs fails during execution of a jar file

I am a complete beginner with Hadoop. I built a jar and tried to execute it with the command below, but I got this error: Mkdirs failed to create D:...\META-INF\license
I checked all the permissions and granted full access, but that did not help.
Command: hadoop jar wiki-stats.jar example/data/stats.txt example/results/
Thanks in advance.
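This looks like the same META-INF/license collision described in the Mac OS X question above: the jar typically contains both a META-INF/LICENSE file and a META-INF/license directory entry, which cannot coexist on a case-insensitive filesystem such as NTFS, so the mkdirs call fails. The workaround mentioned there is to strip the entry from the jar before running it; a sketch, assuming a zip utility is available (for example via Cygwin or Git Bash):

# remove the conflicting license entry from the jar, then rerun
zip -d wiki-stats.jar META-INF/LICENSE
hadoop jar wiki-stats.jar example/data/stats.txt example/results/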

Hadoop MapReduce program not running

I'm new to Hadoop MapReduce. When I try to run my MapReduce code using the following command:
vishal#XXXX bin/hadoop jar /user/vishal/WordCount com.WordCount.java /user/vishal/file01 /user/vishal/output.
It displays the following output:
Exception in thread "main" java.io.IOException: Error opening job jar: /user/vishal/WordCount.jar
at org.apache.hadoop.util.RunJar.main(RunJar.java:130)
Caused by: java.util.zip.ZipException: error in opening zip file
at java.util.zip.ZipFile.open(Native Method)
at java.util.zip.ZipFile.<init>(ZipFile.java:131)
at java.util.jar.JarFile.<init>(JarFile.java:150)
at java.util.jar.JarFile.<init>(JarFile.java:87)
at org.apache.hadoop.util.RunJar.main(RunJar.java:128)
How can I fix this error?
Your command is asking Hadoop to run a JAR but is specifying a directory instead.
You have also added '.java' to the class name, which is not required. (This is assuming you have written the package name, com.WordCount, correctly).
First build the jar at /user/vishal/WordCount.jar (make sure this is a local path, not HDFS), then run the command without the '.java' at the end of the class name. Also, there is a dot at the end of the command in your question; I hope that isn't there in the real command.
bin/hadoop jar /user/vishal/WordCount.jar com.WordCount /user/vishal/file01 /user/vishal/output
See the Hadoop tutorial's 'Usage' section for more.
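For reference, a minimal local build sketch for producing that jar (it assumes WordCount.java declares package com, that a classes/ output directory is acceptable, and that the hadoop classpath command is available to supply the compile-time classpath; on very old releases you would point -classpath at the hadoop-core jar instead):

mkdir -p classes
javac -classpath "$(hadoop classpath)" -d classes WordCount.java
jar cf /user/vishal/WordCount.jar -C classes .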

Hadoop Basic Examples WordCount

I am getting this error with a mostly out-of-the-box configuration of version 0.20.203.0.
Where should I look for a potential issue? Most of the configuration is untouched, and I was able to visit the local web UIs for HDFS and the task tracker.
I am guessing the error is related to a permissions issue on Cygwin and Windows. Googling the problem also turns up suggestions that it might be some kind of out-of-memory issue, but it is such a simple example that I don't see how that could be.
When I try to run the wordcount example:
$ hadoop jar hadoop-examples-0.20.203.0.jar wordcount /user/hduser/gutenberg /user/hduser/gutenberg-output6
I get this error:
2011-08-12 15:45:38,299 WARN org.apache.hadoop.mapred.TaskRunner: attempt_201108121544_0001_m_000008_2 : Child Error
java.io.IOException: Task process exit with nonzero status of 127.
at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
2011-08-12 15:45:38,878 WARN org.apache.hadoop.mapred.TaskLog: Failed to retrieve stdout log for task: attempt_201108121544_0001_m_000008_1
java.io.FileNotFoundException: E:\projects\workspace_mar11\ParseLogCriticalErrors\lib\h\logs\userlogs\job_201108121544_0001\attempt_201108121544_0001_m_000008_1\log.index (The system cannot find the file specified)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:106)
at org.apache.hadoop.io.SecureIOUtils.openForRead(SecureIOUtils.java:102)
at org.apache.hadoop.mapred.TaskLog.getAllLogsFileDetails(TaskLog.java:112)
...
The userlogs/job* directory is empty, so maybe there is some permission issue with those directories.
I am running on Windows with Cygwin, so I don't really know what permissions to set.
I couldn't figure out this problem with the current version of Hadoop, so I reverted to a previous release, hadoop-0.20.2. I had to play around with the core-site.xml configuration file and the temp directories, but I eventually got HDFS and MapReduce working properly.
The issue seems to be the combination of Cygwin, Windows, and the drive setup I was using. Hadoop launches a new JVM process when it invokes a 'child' map/reduce task, and the actual JVM execute statement lives in a shell script.
In my case, Hadoop couldn't find the path to that shell script. I am assuming the status code 127 was the result of the Java runtime exec not finding the shell script.
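As a quick illustration of that last point, exit status 127 is what a shell reports when the command it was asked to run cannot be found, which matches the child-launcher script going missing:

$ some-nonexistent-command
bash: some-nonexistent-command: command not found
$ echo $?
127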
