NameNode: Failed to start namenode on Windows 7

I am trying to install Hadoop on a Windows machine, and partway through I got the error below.
Logs
17/11/28 16:31:48 ERROR namenode.NameNode: Failed to start namenode.
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:609)
at org.apache.hadoop.fs.FileUtil.canWrite(FileUtil.java:996)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:490)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:369)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:225)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:978)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:685)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:585)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:645)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:819)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:803)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1500)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1566)

It looks like you didn't install the Hadoop winutils or build Hadoop with the native libraries.
Native IO is mandatory on Windows, and without it you will not be able to get your installation working. You must follow all the instructions from BUILDING.txt to ensure that Native IO support is built correctly.
Hadoop2 on Windows

I also had a similar issue.
I am using Hadoop 2.8.1. These steps solved the error for me (a quick verification sketch follows below).
Download the winutils build for your version from GitHub.
Copy winutils.exe into <HADOOP_HOME>/bin/.
Also, double-check that the JAVA_HOME environment variable is set correctly and referenced in the hadoop-env.cmd file.
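As a quick sanity check (a minimal sketch, not from the original answer; the CheckWinutils class name is made up), this verifies from Java that HADOOP_HOME is set and that winutils.exe sits in the expected bin directory:

import java.io.File;

// Minimal sketch: confirm HADOOP_HOME is set and winutils.exe is under its bin directory.
public class CheckWinutils {
    public static void main(String[] args) {
        String home = System.getenv("HADOOP_HOME"); // should point at the Hadoop install directory, not at bin
        if (home == null) {
            System.err.println("HADOOP_HOME is not set");
            return;
        }
        File winutils = new File(home, "bin" + File.separator + "winutils.exe");
        System.out.println(winutils.getAbsolutePath() + (winutils.exists() ? " found" : " MISSING"));
    }
}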

Related

xmx1000m is not recognized as an internal or external command: pig on windows

I am trying to set up Pig on Windows 7. I already have a Hadoop 2.7 single-node cluster running on Windows 7.
To set up Pig, I have taken the following steps so far:
Downloaded the tar: http://mirror.metrocast.net/apache/pig/
Extracted tar to: C:\Users\zeba\Desktop\pig
Have set the Environment (User) Variable to:
PIG_HOME = C:\Users\zeba\Desktop\pig
PATH = C:\Users\zeba\Desktop\pig\bin
PIG_CLASSPATH = C:\Users\zeba\Desktop\hadoop\conf
I also changed HADOOP_BIN_PATH in pig.cmd to %HADOOP_HOME%\libexec, as suggested in "Apache pig on windows gives 'hadoop-config.cmd' is not recognized as an internal or external command error when running pig -x local", since I was getting the same error.
When I enter pig, I encounter the following error:
xmx1000m is not recognized as an internal or external command
Please help!
The error went away after installing pig-0.17.0. I was working with pig-0.16.0 previously.
Finally I got it. I changed HADOOP_BIN_PATH in pig.cmd to "%HADOOP_HOME%\hadoop-2.9.2\libexec"; as you can see, "hadoop-2.9.2" is the subdirectory where "libexec" from my Hadoop version is located.
Also fix your HADOOP_HOME accordingly: don't point it at the bin directory, only provide the Hadoop installation path itself.
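For reference, the edited line in pig.cmd ends up looking roughly like this (a sketch only; the hadoop-2.9.2 subdirectory is specific to that answer's layout, and the rest of pig.cmd varies between Pig releases):

rem point HADOOP_BIN_PATH at the directory that actually contains hadoop-config.cmd
set HADOOP_BIN_PATH=%HADOOP_HOME%\hadoop-2.9.2\libexec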

Error in hadoop examples.jar

I just installed Hadoop from the Yahoo Developer Network, running on a VM. I ran the following command after start-all.sh, after cd-ing to the bin folder:
hadoop jar hadoop-0.19.0.-examples.jar pi 10 1000000
I'm getting
java.io.IOException: Error opening job jar: hadoop-0.18.0-examples.jar
at org.apache.hadoop.util.RunJar.main(RunJar.java:90)
at org.apache.hadoop.mapred.JobShell.run(JobShell.java:54)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
at org.apache.hadoop.mapred.JobShell.main(JobShell.java:68)
Caused by: java.util.zip.ZipException: error in opening zip file
How do I sort this out?
Please make sure that you have the things below in place:
The examples jar file is present in the directory where you are running the above command; otherwise you need to give the complete path to the jar file, e.g.:
hadoop jar /usr/lib/hadoop-mapreduce/*example.jar pi 10 100000
The jar has appropriate read permissions for the user you are using to run the Hadoop job.
If you still face the issue, please add the logs to your question.
You will also face this issue if you are using an older version of Java. Hadoop needs Java 7 or Java 8. Please check your Java version and update it if needed.
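A quick way to run these checks from a shell (the jar path is illustrative; adjust it to wherever your distribution puts the examples jar):

# confirm the examples jar exists and is readable by the current user
ls -l /usr/lib/hadoop-mapreduce/hadoop-*examples*.jar
# confirm which Java version Hadoop will pick up
java -version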

How to eliminate Error util.Shell: Failed to locate the winutils binary

I am executing a remote job from a Windows machine (the client) under Eclipse. To clarify, I don't have any Hadoop installation on my Windows client, and I don't need one; I am executing the Hadoop job remotely, and Hadoop is installed on a Linux machine.
Everything executes correctly, but I would like to get rid of this ERROR:
14/09/22 11:49:49 ERROR util.Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:355)
at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:370)
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:363)
at sun.misc.Unsafe.ensureClassInitialized(Native Method)
at sun.reflect.UnsafeFieldAccessorFactory.newFieldAccessor(Unknown Source)
at sun.reflect.ReflectionFactory.newFieldAccessor(Unknown Source)
at java.lang.reflect.Field.acquireFieldAccessor(Unknown Source)
at java.lang.reflect.Field.getFieldAccessor(Unknown Source)
at java.lang.reflect.Field.set(Unknown Source)
at MyFirstJob.main(MyFirstJob.java:45)
Do you know how to keep this exception from happening?
Install winutils.exe; there is no other way of fixing this error.
Here is a little context: Hadoop will write some files locally (e.g. the job configs) before uploading them to the cluster. Thus it will need to set permissions, write some files or create directories.
In case it doesn't find the binary, it will fall back to the Java implementations anyway, so you don't need to worry. However, there is no built-in configuration to turn this message off, so the only way to really fix it is to recompile your hadoop-common jar without this error (I guess installing winutils isn't that bad compared to that).
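If you do place winutils.exe somewhere on the client, you can point Hadoop at it from your driver before any Hadoop class loads, so the lookup succeeds and the ERROR is never logged. A minimal sketch, not from the original post (the C:\hadoop path is illustrative; MyFirstJob is the class from the stack trace above):

public class MyFirstJob {
    public static void main(String[] args) throws Exception {
        // Assumption: winutils.exe was copied to C:\hadoop\bin on this client.
        // Shell's static initializer reads hadoop.home.dir (see the Shell.java
        // excerpt in the next question), so set it before touching any Hadoop class.
        System.setProperty("hadoop.home.dir", "C:\\hadoop");
        // ... build the Configuration / Job and submit it as before ...
    }
}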
Copy org.apache.hadoop.util.Shell.java into your project.
You can comment out the line below to remove the error:
throw new IOException("Could not locate executable " + fullExeName + " in the Hadoop binaries.");
Also, for Windows, check:
Error while running Mapreduce(yarn)from windows eclipse
I saw a suggestion somewhere to just create an empty file with that name to get rid of the error. I think I tried it once and it worked; feel free to try it and see if it works for you. The file can be created on the fly if needed.

Hadoop+HBase cluster on windows: winutils not found

I'm trying to set up a fully-distributed 4-node dev cluster with Hadoop 2.20 and HBase 0.98 on Windows. I've built Hadoop on Windows successfully and, more recently, also built HBase on Windows.
We have successfully run the wordcount example from the Hadoop installation guide, as well as a custom WebHDFS job. As fully-distributed HBase on Windows isn't supported yet, I'm running HBase under Cygwin.
When trying to start HBase from my master (./bin/start-hbase.sh), I get the following error:
2014-04-17 16:22:08,599 ERROR [main] util.Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:278)
at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:300)
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:293)
at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
at org.apache.hadoop.conf.Configuration.getStrings(Configuration.java:1514)
at org.apache.hadoop.hbase.zookeeper.ZKConfig.makeZKProps(ZKConfig.java:113)
at org.apache.hadoop.hbase.zookeeper.ZKServerTool.main(ZKServerTool.java:46)
Looking at the Shell.java source, what is set to null here seems to be the HADOOP_HOME environment variable. With Hadoop under D:/hadoop and HBase under the Cygwin root at C:/cygwin/root/usr/local/hbase, the Cygwin $HADOOP_HOME variable is /cygdrive/d/hadoop/ and the Windows system environment variable %HADOOP_HOME% is D:\hadoop. It seems to me that with those two variables set, the path should be found correctly...
Also potentially relevant: I'm running Windows Server 2012 x64.
Edit: I have verified that there actually is a winutils.exe in D:\hadoop\bin\.
We've found it. In Hadoop's Shell.java, you'll find that there are two ways to communicate the Hadoop path:
// first check the Dflag hadoop.home.dir with JVM scope
String home = System.getProperty("hadoop.home.dir");
// fall back to the system/user-global env variable
if (home == null) {
  home = System.getenv("HADOOP_HOME");
}
After trial and error, we found that in the HBase options (HBase's hbase-env.sh, the HBASE_OPTS variable) you'll need to add this option with the Windows(!) path to Hadoop. In our case, we needed to add -Dhadoop.home.dir=D:/hadoop.
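In hbase-env.sh that looks roughly like this (a sketch; keep whatever options you already have in HBASE_OPTS and use your own Hadoop path):

# hbase-env.sh: pass the Windows-style Hadoop path to every HBase JVM
export HBASE_OPTS="$HBASE_OPTS -Dhadoop.home.dir=D:/hadoop"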
Good luck to anyone else who happens to stumble across this ;).

Hadoop 1.0.4 - file permission issue in running map reduce jobs

I am new to Hadoop and need to set up a sandbox environment on Windows to showcase to a client. I have followed the steps mentioned below:
Install Cygwin on all machines
Set up SSH
Install Hadoop 1.0.4
Configure Hadoop
Apply the patch for the HADOOP-7682 bug
After a lot of trial and error, I was able to run all the components successfully (namenode, datanode, tasktracker and jobtracker). But now I am facing a problem while running map-reduce jobs: I get a permission error on the tmp directory. When I run the word count example using the following command:
bin/hadoop jar hadoop*examples*.jar wordcount wcountjob wcountjob/gutenberg-output
13/03/28 23:43:29 INFO mapred.JobClient: Task Id : attempt_201303282342_0001_m_000003_2, Status : FAILED
Error initializing attempt_201303282342_0001_m_000003_2:
java.io.IOException: Failed to set permissions of path: c:\cygwin\usr\local\tmp\taskTracker\uswu50754 to 0700
at org.apache.hadoop.fs.FileUtil.checkReturnValue(FileUtil.java:689)
at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:662)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:509)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:344)
at org.apache.hadoop.mapred.JobLocalizer.createLocalDirs(JobLocalizer.java:144)
at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:182)
at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
at java.lang.Thread.run(Thread.java:662)
I have tried setting the permissions manually, but that also doesn't work. What I understand is that this is due to the Java libraries being used, which try to reset the permissions and fail. The permission patch that solved the tasktracker problem doesn't seem to solve this one.
Has anybody found a solution for this?
Can anybody point me to a download location for Hadoop 0.20.2, which seems to be immune to this problem?
