Hadoop java.io.IOException: Mkdirs failed to create /some/path when running a MapReduce job on Mac OS X

When I run my MR job on Mac OS X, I get the following exception:
Exception in thread "main" java.io.IOException: Mkdirs failed to create /var/folders/9m/w_vzzmtx0rq0tt9whf_r4yhr0000gn/T/hadoop-unjar7688811202881231043/META-INF/license
at org.apache.hadoop.util.RunJar.ensureDirectory(RunJar.java:128)
at org.apache.hadoop.util.RunJar.unJar(RunJar.java:104)
at org.apache.hadoop.util.RunJar.unJar(RunJar.java:81)
at org.apache.hadoop.util.RunJar.run(RunJar.java:209)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
According to other posts, people suggest removing META-INF/LICENSE from the jar file as a workaround, but that feels like a temporary solution to me.
I think it would be resolved if I could change the path where the temporary files below are stored:
/var/folders/9m/.../META-INF/license
I checked the permissions and tried changing the "hadoop.tmp.dir" value in core-site.xml, but it didn't work for me.
PS. I know the issue is caused by the case-insensitive file system on OS X. That is why I am working in a directory on a mounted disk image, which is case sensitive.
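For reference, the two workarounds I have seen look roughly like this (the jar name, driver class, and case-sensitive mount point are placeholders, and I am assuming that RunJar extracts into java.io.tmpdir on this version, which the /var/folders path above suggests):
# strip the conflicting entry from the job jar before submitting (the "temporary" fix)
zip -d my-mr-job.jar META-INF/LICENSE
# or point the client-side unjar directory at a case-sensitive volume instead
export HADOOP_CLIENT_OPTS="-Djava.io.tmpdir=/Volumes/CaseSensitive/tmp $HADOOP_CLIENT_OPTS"
hadoop jar my-mr-job.jar MyDriver /input /output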
Thanks in advance!

Related

Mkdirs failed to create file

Please give me some help.
I am just working through some examples right now, from data-algorithms-book by Parsian.
I uploaded my text file to HDFS and already made a run.sh file, but when I run it on Spark, it gives an error like this:
Mkdirs failed to create file:/output/2/_temporary/0/_temporary/attempt_201603120204_0000_m_000000_3 (exists=false, cwd=file:/home/hadoop/spark-1.6.0/work/app-20160312020423-0001/0)
and this is driving me crazy... please give me a solution.

How to eliminate Error util.Shell: Failed to locate the winutils binary

I am executing a remote job from a Windows machine (the client) under Eclipse. To clarify, I do not have any Hadoop installation on my Windows client, and I do not need one: I am executing the Hadoop job remotely, and Hadoop is installed on a Linux machine.
Everything executes correctly, but I would like to get rid of this ERROR:
14/09/22 11:49:49 ERROR util.Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:355)
at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:370)
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:363)
at sun.misc.Unsafe.ensureClassInitialized(Native Method)
at sun.reflect.UnsafeFieldAccessorFactory.newFieldAccessor(Unknown Source)
at sun.reflect.ReflectionFactory.newFieldAccessor(Unknown Source)
at java.lang.reflect.Field.acquireFieldAccessor(Unknown Source)
at java.lang.reflect.Field.getFieldAccessor(Unknown Source)
at java.lang.reflect.Field.set(Unknown Source)
at MyFirstJob.main(MyFirstJob.java:45)
Do you know how to stop this exception from happening?
Install winutils.exe; there is no other way of fixing this error.
Here is a little context: Hadoop will write some files locally (e.g. the job configs) before uploading them to the cluster, so it needs to set permissions, write some files, or create directories.
In case it doesn't find the binary, it will fall back to the Java implementations anyway, so you don't need to worry. However, there is no built-in configuration to turn this message off, so the only way to really get rid of it is to recompile your hadoop-common jar without this error (I guess installing winutils isn't that bad compared to that).
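If you do install it, a minimal sketch of the usual setup looks like this (the C:\hadoop path is only an illustration, and the winutils.exe build must match your Hadoop version):
:: place a winutils.exe built for your Hadoop version under %HADOOP_HOME%\bin
mkdir C:\hadoop\bin
copy winutils.exe C:\hadoop\bin\
setx HADOOP_HOME "C:\hadoop"
When launching from Eclipse, you can alternatively pass -Dhadoop.home.dir=C:\hadoop as a VM argument, since Shell.java checks that system property before falling back to the HADOOP_HOME environment variable.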
Copy org.apache.hadoop.util.Shell.java into your project. You can comment out the line below to remove the error:
throw new IOException("Could not locate executable " + fullExeName + " in the Hadoop binaries.");
Also, for Windows, check:
Error while running Mapreduce(yarn)from windows eclipse
I saw a suggestion somewhere to just create an empty file with that name to get rid of the error. I think I tried it once and it worked; feel free to try it and see if it works for you. The file can be created on the fly if needed.

Hadoop local job directory gets deleted before the job completes on task nodes

We are having a strange issue in our Hadoop cluster. We have noticed that some of our jobs fail with a file-not-found exception [see below]. Basically, the files in the "attempt_*" directory and the directory itself are getting deleted while the task is still running on the host. Looking through some of the Hadoop documentation, I see that the job directory gets wiped out when it receives a KillJobAction; however, I am not sure why it gets wiped out while the job is still running.
My question is: what could be deleting it while the job is running? Any thoughts or pointers on how to debug this would be helpful.
Thanks!
java.io.FileNotFoundException: <dir>/hadoop/mapred/local_data/taskTracker/<user>/jobcache/job_201211030344_15383/attempt_201211030344_15383_m_000169_0/output/spill29.out (Permission denied)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:120)
at org.apache.hadoop.fs.RawLocalFileSystem$TrackingFileInputStream.<init>(RawLocalFileSystem.java:71)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileInputStream.<init>(RawLocalFileSystem.java:107)
at org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:177)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:400)
at org.apache.hadoop.mapred.Merger$Segment.init(Merger.java:205)
at org.apache.hadoop.mapred.Merger$Segment.access$100(Merger.java:165)
at org.apache.hadoop.mapred.Merger$MergeQueue.merge(Merger.java:418)
at org.apache.hadoop.mapred.Merger$MergeQueue.merge(Merger.java:381)
at org.apache.hadoop.mapred.Merger.merge(Merger.java:77)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.mergeParts(MapTask.java:1692)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1322)
at org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:698)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:765)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:369)
at org.apache.hadoop.mapred.Child$4.run(Child.java:259)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
at org.apache.hadoop.mapred.Child.main(Child.java:253)
I had a similar error and found that this permission error was caused by the Hadoop program not being able to create or access the files.
Are the files being created inside HDFS or on a local file system?
If they are on a local file system, then try setting permissions on that folder; if they are in HDFS, then try setting permissions on that HDFS folder.
If you are running it on Ubuntu, then try running:
chmod -R a=rwx <dir>/hadoop/mapred/local_data/taskTracker/<user>/jobcache/job_201211030344_15383/attempt_201211030344_15383_m_000169_0/output/

Hadoop Basic Examples WordCount

I am getting this error with a mostly out-of-the-box configuration of version 0.20.203.0.
Where should I look for a potential issue? Most of the configuration is out of the box. I was able to visit the local web pages for HDFS and the task tracker.
I am guessing the error is related to a permissions issue on Cygwin and Windows. Also, from googling the problem, some say there might be some kind of out-of-memory issue, but it is such a simple example that I don't see how that could be.
When I try to run the wordcount example:
$ hadoop jar hadoop-examples-0.20.203.0.jar wordcount /user/hduser/gutenberg /user/hduser/gutenberg-output6
I get this error:
2011-08-12 15:45:38,299 WARN org.apache.hadoop.mapred.TaskRunner: attempt_201108121544_0001_m_000008_2 : Child Error
java.io.IOException: Task process exit with nonzero status of 127.
at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
2011-08-12 15:45:38,878 WARN org.apache.hadoop.mapred.TaskLog: Failed to retrieve stdout log for task: attempt_201108121544_0001_m_000008_1
java.io.FileNotFoundException: E:\projects\workspace_mar11\ParseLogCriticalErrors\lib\h\logs\userlogs\job_201108121544_0001\attempt_201108121544_0001_m_000008_1\log.index (The system cannot find the file specified)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:106)
at org.apache.hadoop.io.SecureIOUtils.openForRead(SecureIOUtils.java:102)
at org.apache.hadoop.mapred.TaskLog.getAllLogsFileDetails(TaskLog.java:112)
...
The userlogs/job* directory is empty. Maybe there is some permission issue with those directories.
I am running on Windows with Cygwin, so I don't really know what permissions to set.
I couldn't figure out this problem with the current version of Hadoop. I reverted from the current version and went to a previous release, hadoop-0.20.2. I had to play around with the core-site.xml configuration file and temp directories, but I eventually got HDFS and MapReduce to work properly.
The issue seems to be Cygwin, Windows, and the drive setup that I was using. Hadoop launches a new JVM process when it tries to invoke a 'child' map/reduce task, and the actual JVM execute statement is in some shell script.
In my case, Hadoop couldn't find the path to that shell script. I am assuming the status code 127 error was the result of the Java runtime's execute not finding the shell script.
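As a quick sanity check on that reading of the exit code: a Unix shell returns 127 when it cannot find the command it was asked to run, for example:
$ nonexistent-command; echo $?
bash: nonexistent-command: command not found
127
which matches the "Task process exit with nonzero status of 127" message above.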

Can't run Pig with a single-node Hadoop server

I have set up a VM with Ubuntu. It runs Hadoop as a single node. Later I installed Apache Pig on it. Apache Pig runs great in local mode, but it always prompts ERROR 2999: Unexpected internal error. Failed to create DataStorage.
I am probably missing something very obvious. Can someone help me get this running, please?
More details:
1. I assume that Hadoop is running fine because I can run MapReduce jobs in Python.
2. pig -x local runs as I expect.
3. When I just type pig, it gives me the following error:
Error before Pig is launched
----------------------------
ERROR 2999: Unexpected internal error. Failed to create DataStorage
java.lang.RuntimeException: Failed to create DataStorage
at org.apache.pig.backend.hadoop.datastorage.HDataStorage.init(HDataStorage.java:75)
at org.apache.pig.backend.hadoop.datastorage.HDataStorage.<init>(HDataStorage.java:58)
at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.init(HExecutionEngine.java:214)
at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.init(HExecutionEngine.java:134)
at org.apache.pig.impl.PigContext.connect(PigContext.java:183)
at org.apache.pig.PigServer.<init>(PigServer.java:226)
at org.apache.pig.PigServer.<init>(PigServer.java:215)
at org.apache.pig.tools.grunt.Grunt.<init>(Grunt.java:55)
at org.apache.pig.Main.run(Main.java:452)
at org.apache.pig.Main.main(Main.java:107)
Caused by: java.io.IOException: Call to localhost/127.0.0.1:54310 failed on local exception: java.io.EOFException
at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
at org.apache.hadoop.ipc.Client.call(Client.java:743)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
at $Proxy0.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
at org.apache.pig.backend.hadoop.datastorage.HDataStorage.init(HDataStorage.java:72)
... 9 more
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:375)
at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
================================================================================
The link helped me understand a possible cause of the failure.
Here is what fixed my problem:
1. Recompile Pig without Hadoop.
2. Update PIG_CLASSPATH to include all the jars from $HADOOP_HOME/lib.
3. Run pig.
Thanks.
Set your PIG_CLASSPATH to point to your correct HADOOP_HOME installation so that Pig can pick up your cluster information from core-site.xml, mapred-site.xml, and hdfs-site.xml; it is better to follow the link for a correct installation.
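A minimal sketch of that setup, assuming a Hadoop 1.x tarball at /usr/local/hadoop (the paths are placeholders; adjust them to your own installation):
export HADOOP_HOME=/usr/local/hadoop
# let Pig pick up core-site.xml, hdfs-site.xml and mapred-site.xml
PIG_CLASSPATH=$HADOOP_HOME/conf
# step 2 from the answer above: add the jars from $HADOOP_HOME/lib
for jar in "$HADOOP_HOME"/lib/*.jar; do
  PIG_CLASSPATH=$PIG_CLASSPATH:$jar
done
export PIG_CLASSPATH
pig   # mapreduce mode should now reach the NameNode at localhost:54310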
Just install Cygwin, then add the Cygwin path to the PATH environment variable:
For details see here.
