Unable to run Hadoop on Windows 7

I am new to Hadoop and trying to run it on Windows 7.
Whenever I try to run the hadoop bash script, I get the following error:
'-Xmx32m' is not recognized as an internal or external command,
operable program or batch file.
Usage: hadoop [--config confdir] COMMAND
where COMMAND is one of:
fs run a generic filesystem user client
version print the version
jar <jar> run a jar file
checknative [-a|-h] check native hadoop and compression libraries availability
distcp <srcurl> <desturl> copy file or directories recursively
archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
classpath prints the class path needed to get the
Hadoop jar and the required libraries
credential interact with credential providers
key manage keys via the KeyProvider
daemonlog get/set the log level for each daemon
or
CLASSNAME run the class named CLASSNAME
Most commands print help when invoked w/o parameters.
Also, when I run the hdfs command,
I get the following error:
-Xms1000m is not recognized as an internal or external command.
When I try to pass the -Xmx and -Xms arguments, I get the following message:
Error occurred during initialization of VM
Could not reserve enough space for object heap
Can anyone help me out with this?

The error message
is not recognized as an internal or external command
indicates that you attempted to run a program from the command line that Windows doesn't recognize. This likely has nothing to do with -Xms and -Xmx; the problem is that Windows cannot find java.
Make sure you can run java -version no matter which folder you are currently in. If you can't, you need to add Java to the PATH environment variable.
This could also be caused by installing Java or Hadoop in a folder whose path contains spaces, e.g. C:\Program Files has a space in it, and that can be a problem. If that's the cause, install Java and Hadoop in folders without spaces in their paths.
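For example, a quick sanity check from a Command Prompt might look like the sketch below (the JDK path is only an assumption; substitute your actual, space-free install location):
REM check that Windows can resolve java from any directory
java -version
where java
REM if java is not found, point JAVA_HOME at the JDK and add its bin folder to the user PATH
REM (the path below is an example; use your own JDK location, or edit PATH via the Environment Variables dialog)
setx JAVA_HOME "C:\Java\jdk1.8.0_151"
setx PATH "%PATH%;C:\Java\jdk1.8.0_151\bin"
REM setx only affects new sessions, so open a fresh Command Prompt and re-run java -version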

Related

Snappy compressed file on HDFS appears without extension and is not readable

I configured a MapReduce job to save its output as a SequenceFile compressed with Snappy. The MR job executes successfully; however, the output file in HDFS appears as part-r-00000, with no extension.
I expected the file to have a .snappy extension, i.e. to be named part-r-00000.snappy. Now I think this may be the reason the file is not readable when I try to read it from my local file system using this pattern: hadoop fs -libjars /path/to/jar/myjar.jar -text /path/in/HDFS/to/my/file
So I'm getting "–libjars: Unknown command" when executing the command:
hadoop fs –libjars /root/hd/metrics.jar -text /user/maria_dev/hd/output/part-r-00000
And when I'm using this command hadoop fs -text /user/maria_dev/hd/output/part-r-00000, I'm getting the error:
18/02/15 22:01:57 INFO compress.CodecPool: Got brand-new decompressor [.snappy]
-text: Fatal internal error
java.lang.RuntimeException: java.io.IOException: WritableName can't load class: com.hd.metrics.IpMetricsWritable
Caused by: java.lang.ClassNotFoundException: Class com.hd.ipmetrics.IpMetricsWritable not found
Could it be that the absence of the .snappy extension causes the problem? What other command should I try to read the compressed file?
The jar is on my local file system at /root/hd/. Where should I place it to avoid the ClassNotFoundException? Or how should I modify the command?
Instead of hadoop fs –libjars (which actually has the wrong hyphen and should be -libjars; copy that exactly and you won't see "Unknown command"),
you should be using the HADOOP_CLASSPATH environment variable:
export HADOOP_CLASSPATH=/root/hd/metrics.jar:${HADOOP_CLASSPATH}
hadoop fs -text /user/maria_dev/hd/output/part-r-*
The error clearly says ClassNotFoundException: Class com.hd.ipmetrics.IpMetricsWritable not found.
It means that a required library is missing from the classpath.
To clarify your doubts:
MapReduce outputs files as part-* by default, and the extension has no particular meaning. Remember that a file extension is just metadata, usually required by the Windows operating system to determine a suitable program for the file. It has no meaning in Linux/Unix, and the system's behavior is not going to change even if you rename the file to .snappy (you may actually try this).
The command itself looks absolutely fine for inspecting the Snappy file, but it seems that a required jar file is not on the classpath, which is causing the ClassNotFoundException.
EDIT 1:
By default, Hadoop picks up jar files from the path emitted by the command below:
$ hadoop classpath
By default it lists all the Hadoop core jars.
You can add your jar by executing the command below at the prompt:
export HADOOP_CLASSPATH=/path/to/my/custom.jar
After executing this, check the classpath again with the hadoop classpath command; you should see your jar listed along with the Hadoop core jars.

xmx1000m is not recognized as an internal or external command: Pig on Windows

I am trying to set up Pig on Windows 7. I already have a Hadoop 2.7 single-node cluster running on Windows 7.
To set up Pig, I have taken the following steps so far:
Downloaded the tar: http://mirror.metrocast.net/apache/pig/
Extracted tar to: C:\Users\zeba\Desktop\pig
Have set the Environment (User) Variable to:
PIG_HOME = C:\Users\zeba\Desktop\pig
PATH = C:\Users\zeba\Desktop\pig\bin
PIG_CLASSPATH = C:\Users\zeba\Desktop\hadoop\conf
Also changed HADOOP_BIN_PATH in pig.cmd to %HADOOP_HOME%\libexec, as suggested in (Apache pig on windows gives "hadoop-config.cmd' is not recognized as an internal or external command" error when running "pig -x local"), since I was getting the same error.
When I enter pig, I encounter the following error:
xmx1000m is not recognized as an internal or external command
Please help!
The error went away after installing pig-0.17.0. I was using pig-0.16.0 previously.
Finally I got it. I changed HADOOP_BIN_PATH in pig.cmd to "%HADOOP_HOME%\hadoop-2.9.2\libexec"; as you can see, "hadoop-2.9.2" is the subfolder where "libexec" from my Hadoop version is located.
Fix your "HADOOP_HOME" according to given image don't provide bin path only provide hadoop path.

Pig installation not working

I have installed Pig 0.12.0 on my box. I have also installed Java and Hadoop and have set the JAVA_HOME and HADOOP_HOME paths. When I go to the bin directory of the Pig installation and type the following command at my command prompt:
pig -help
it errors out with the following message:
The system cannot find the path specified.
'-Xmx1000M' is not recognized as an internal or external command,
operable program or batch file.
What's wrong?
Should I be using Cygwin? (That didn't work either.)
I just installed Pig 0.12.1 on Windows 7 without Hadoop installed. I also got this error message and resolved it by setting the "JAVA" environment variable to point to the java.exe executable.
In my case, I set JAVA=C:\Progra~1\Java\jdk1.8.0_05\bin\java.exe
I also set:
JAVA_HOME=C:\Progra~1\Java\jdk1.8.0_05
PIG_HOME=C:\pig-0.12.1 (This is where I extracted pig-0.12.1.tar.gz)
and added C:\pig-0.12.1\bin to my PATH environment variable.
Hope this helps anyone else with this issue!
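Putting that configuration into commands, a sketch could look like this (the paths mirror the ones above; adjust them to your own machine):
REM JAVA points directly at the java.exe executable, as described above
setx JAVA "C:\Progra~1\Java\jdk1.8.0_05\bin\java.exe"
setx JAVA_HOME "C:\Progra~1\Java\jdk1.8.0_05"
setx PIG_HOME "C:\pig-0.12.1"
REM append Pig's bin folder to the user PATH (or edit PATH in the Environment Variables dialog)
setx PATH "%PATH%;C:\pig-0.12.1\bin"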
I know this is a very late reply, but I hope this will help someone configure Pig.
I am using Windows 8.1 Pro N x64.
Note -> Configuration will be easier if your directory paths don't contain whitespace.
Steps to configure Pig with Hadoop
java path location (jdk1.8.0_151)
JAVA_HOME C:\Java\jdk1.8.0_151
python path location (Python27)
C:\Python27
ant path location (apache-ant-1.10.1)
ANT_HOME F:\Hadoop\apache-ant-1.10.1
hadoop path location (hadoop-2.8.2)
HADOOP_HOME F:\Hadoop\hadoop-2.8.2
HADOOP_COMMON_LIB_NATIVE_DIR %HADOOP_HOME%\lib\native
HADOOP_CONF_DIR %HADOOP_HOME%\etc\hadoop
pig path location (pig-0.17.0)
PIG_HOME F:\Hadoop\pig-0.17.0
System Variables Path
C:\Python27\;C:\Python27\Scripts;C:\Java\jdk1.8.0_151\bin;F:\Hadoop\hadoop-2.8.2\bin;F:\Hadoop\hadoop-2.8.2\sbin;F:\Hadoop\pig-0.17.0\bin;
Find winutils-master.zip on GitHub, download it, and extract it to the %HADOOP_HOME%\bin directory.
Open %PIG_HOME%\bin\pig.cmd using Notepad/Notepad++ (Notepad++ recommended).
Change the line below, then save and close the file:
"set HADOOP_BIN_PATH=%HADOOP_HOME%\bin" to
"set HADOOP_BIN_PATH=%HADOOP_HOME%\libexec"
Now Pig will access hadoop-config.cmd inside the %HADOOP_HOME% path, as configured earlier.
Run start-all.cmd from Hadoop to start the cluster with all dependencies.
Go to %PIG_HOME%\bin and check with pig -help (the available options should be listed).
Run pig to enter the grunt shell. (A short verification sketch appears at the end of this answer.)
Note -> there is a good chance of getting the exceptions below if you don't configure things as above.
'F:\Hadoop\hadoop-2.8.2\bin\hadoop-config.cmd' is not recognized as an internal or external command, operable program or batch file.
'-Xmx1000M' is not recognized as an internal or external command, operable program or batch file.
I hope these illustrated steps help you configure and start the Pig grunt shell. Thanks.
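As a quick verification sketch of the steps above (it assumes the paths configured in this answer and a fresh Command Prompt):
REM confirm Hadoop itself resolves before starting Pig
hadoop version
REM confirm Pig finds hadoop-config.cmd via the edited HADOOP_BIN_PATH
pig -help
REM finally, enter the grunt shell
pig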
This will solve your problem...
1. Download Pig -> http://mirrors.estointernet.in/apache/pig/pig-0.16.0/
2. Set properties ->
PIG_HOME=C:\Users\lenovo\Downloads\pig-0.16.0\pig-0.16.0
path=C:\Users\lenovo\Downloads\pig-0.16.0\pig-0.16.0\bin
PIG_CLASSPATH=C:\Users\lenovo\Downloads\hadoop-2.7.3\hadoop-2.7.3\etc\hadoop (where core-site.xml and mapred-site.xml are present)
3. Open the file pig.cmd (from the bin directory of Pig)
-> look for the line set HADOOP_BIN_PATH=%HADOOP_HOME%\bin
-> replace it with set HADOOP_BIN_PATH=%HADOOP_HOME%\libexec
4. Now, in the command prompt, run -> pig

Failed to locate the winutils binary in the hadoop binary path

I am getting the following error while starting the namenode for the latest hadoop-2.2 release. I didn't find the winutils exe file in the Hadoop bin folder. I tried the commands below:
$ bin/hdfs namenode -format
$ sbin/yarn-daemon.sh start resourcemanager
ERROR [main] util.Shell (Shell.java:getWinUtilsPath(303)) - Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:278)
at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:300)
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:293)
at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:863)
Simple Solution:
Download it from here and add it to $HADOOP_HOME/bin
(Source)
IMPORTANT UPDATE:
For hadoop-2.6.0 you can download the binaries from the Titus Barik blog.
I not only needed to point HADOOP_HOME to the extracted directory [path], but also to provide the system property -Djava.library.path=[path]\bin to load the native libs (dll).
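As a sketch of that combination (the extracted path and the application entry point are assumptions, not taken from the original answer):
REM point HADOOP_HOME at the extracted directory; takes effect in new shells
setx HADOOP_HOME "C:\hadoop-2.6.0"
REM pass the native-library location to the JVM when launching the application
REM (app.jar and MyApp are hypothetical placeholders for your own classpath and main class)
java -Djava.library.path=C:\hadoop-2.6.0\bin -cp app.jar MyApp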
If you face this problem when running a self-contained local application with Spark (i.e., after adding spark-assembly-x.x.x-hadoopx.x.x.jar or the Maven dependency to the project), a simpler solution would be to put winutils.exe (download from here) in "C:\winutil\bin". Then you can point the Hadoop home directory at it by adding the following line to the code:
System.setProperty("hadoop.home.dir", "c:\\winutil\\")
Source: Click here
If we directly take the binary distribution of Apache Hadoop 2.2.0 release and try to run it on Microsoft Windows, then we'll encounter ERROR util.Shell: Failed to locate the winutils binary in the hadoop binary path.
The binary distribution of Apache Hadoop 2.2.0 release does not contain some windows native components (like winutils.exe, hadoop.dll etc). These are required (not optional) to run Hadoop on Windows.
So you need to build a Windows-native binary distribution of Hadoop from the source code, following the "BUILD.txt" file located inside the source distribution of Hadoop. You can also follow these posts for a step-by-step guide with screenshots:
Build, Install, Configure and Run Apache Hadoop 2.2.0 in Microsoft Windows OS
ERROR util.Shell: Failed to locate the winutils binary in the hadoop binary path
The statement
java.io.IOException: Could not locate executable null\bin\winutils.exe
explains that the null comes from expanding or resolving an environment variable. If you look at the source of Shell.java in the Common package, you will find that the HADOOP_HOME variable is not being set and you are receiving null in its place, hence the error.
So, HADOOP_HOME needs to be set properly, or alternatively the hadoop.home.dir property.
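In other words, either of the following should remove the null (a sketch with an assumed install path; the jar and main class are placeholders):
REM option 1: set the environment variable (read by Shell.java as the fallback)
setx HADOOP_HOME "C:\hadoop-2.2.0"
REM option 2: pass the JVM-scoped property when launching the process that needs it
java -Dhadoop.home.dir=C:\hadoop-2.2.0 -cp your-app.jar YourMainClass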
Hope this helps.
Thanks,
Kamleshwar.
Winutils.exe is used for running shell commands for Spark.
When you need to run Spark without installing Hadoop, you need this file.
Steps are as follows:
Download winutils.exe from the following location for Hadoop 2.7.1:
https://github.com/steveloughran/winutils/tree/master/hadoop-2.7.1/bin
[NOTE: If you are using a different Hadoop version, download winutils from the corresponding Hadoop version folder on GitHub at the location mentioned above.]
Now, create a folder 'winutils' in the C:\ drive. Create a folder 'bin' inside 'winutils' and copy winutils.exe into that folder.
So the location of winutils.exe will be C:\winutils\bin\winutils.exe
Now, open environment variable and set HADOOP_HOME=C:\winutils
[NOTE: Please do not add \bin in HADOOP_HOME and no need to set HADOOP_HOME in Path]
Your issue should now be resolved!
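Putting those steps together as commands (a sketch; it assumes winutils.exe was saved to your Downloads folder):
REM create the folder layout C:\winutils\bin
mkdir C:\winutils\bin
REM copy the downloaded winutils.exe into it (the source path is an assumption)
copy "%USERPROFILE%\Downloads\winutils.exe" C:\winutils\bin\
REM point HADOOP_HOME at C:\winutils, not at C:\winutils\bin
setx HADOOP_HOME "C:\winutils"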
I just ran into this issue while working with Eclipse. In my case, I had downloaded the correct Hadoop version (hadoop-2.5.0-cdh5.3.0.tgz), extracted the contents, and placed it directly in my C drive. Then I went to
Eclipse->Debug/Run Configurations -> Environment (tab) -> and added
variable: HADOOP_HOME
Value: C:\hadoop-2.5.0-cdh5.3.0
You can download winutils.exe here:
http://public-repo-1.hortonworks.com/hdp-win-alpha/winutils.exe
Then copy it to your HADOOP_HOME/bin directory.
In PySpark, to run a local Spark application using PyCharm, use the lines below:
import os
os.environ['HADOOP_HOME'] = "C:\\winutils"
print(os.environ['HADOOP_HOME'])
winutils.exe is required for Hadoop to perform Hadoop-related commands. Please download the hadoop-common-2.2.0 zip file. winutils.exe can be found in its bin folder. Extract the zip file and copy winutils.exe into the local hadoop/bin folder.
I was facing the same problem. Removing the bin\ from the HADOOP_HOME path solved it for me. The path for the HADOOP_HOME variable should look something like:
C:\dev\hadoop2.6\
System restart may be needed. In my case, restarting the IDE was sufficient.
As most answers here refer to pretty old versions of winutils, I will leave a link to the most comprehensive repository, which supports all versions of Hadoop including the most recent ones:
https://github.com/kontext-tech/winutils
(find the directory corresponding to your Hadoop version, or try the most recent one).
If you have admin permissions on your machine:
Put the bin directory into C:\winutils
The whole path should be C:\winutils\bin\winutils.exe
Set HADOOP_HOME to C:\winutils
If you don't have admin permissions or want to put the binaries into user space:
Put the bin directory into C:\Users\vryabtse\AppData\Local\Programs\winutils or a similar user directory.
Set the HADOOP_HOME value to the path of this directory.
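For example (a sketch; %USERPROFILE% stands in for the user directory mentioned above):
REM a per-user location needs no admin rights
mkdir "%USERPROFILE%\AppData\Local\Programs\winutils\bin"
setx HADOOP_HOME "%USERPROFILE%\AppData\Local\Programs\winutils"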
Set the HADOOP_HOME variable in Windows to resolve the problem.
You can find the answer in org/apache/hadoop/hadoop-common/2.2.0/hadoop-common-2.2.0-sources.jar!/org/apache/hadoop/util/Shell.java:
The IOException comes from:
public static final String getQualifiedBinPath(String executable)
    throws IOException {
  // construct hadoop bin path to the specified executable
  String fullExeName = HADOOP_HOME_DIR + File.separator + "bin"
      + File.separator + executable;
  File exeFile = new File(fullExeName);
  if (!exeFile.exists()) {
    throw new IOException("Could not locate executable " + fullExeName
        + " in the Hadoop binaries.");
  }
  return exeFile.getCanonicalPath();
}
and HADOOP_HOME_DIR comes from:
// first check the Dflag hadoop.home.dir with JVM scope
String home = System.getProperty("hadoop.home.dir");
// fall back to the system/user-global env variable
if (home == null) {
  home = System.getenv("HADOOP_HOME");
}
Download the desired version of the hadoop folder (say, if you are installing Spark on Windows, then the Hadoop version that your Spark was built for) from this link as a zip.
Extract the zip to the desired directory.
You need a directory of the form hadoop\bin (explicitly create such a hadoop\bin directory structure if you want), with bin containing all the files from the bin folder of the downloaded Hadoop. This will contain many files such as hdfs.dll and hadoop.dll in addition to winutils.exe.
Now create the environment variable HADOOP_HOME and set it to <path-to-hadoop-folder>\hadoop. Then add ;%HADOOP_HOME%\bin; to the PATH environment variable.
Open a "new command prompt" and try rerunning your command.
Download [winutils.exe]
From URL :
https://github.com/steveloughran/winutils/hadoop-version/bin
Paste it under HADOOP_HOME/bin
Note: You should set the environment variables:
User variable:
Variable: HADOOP_HOME
Value: Hadoop or spark dir
I used "hbase-1.3.0" and "hadoop-2.7.3" versions. Setting HADOOP_HOME environment variable and copying 'winutils.exe' file under HADOOP_HOME/bin folder solves the problem on a windows os.
Attention to set HADOOP_HOME environment to the installation folder of hadoop(/bin folder is not necessary for these versions).
Additionally I preferred using cross platform tool cygwin to settle linux os functionality (as possible as it can) because Hbase team recommend linux/unix env.
I was getting the same issue on Windows. I fixed it by:
Downloading hadoop-common-2.2.0-bin-master from the link.
Creating a user variable HADOOP_HOME in the environment variables and assigning the path of the hadoop-common bin directory as its value.
You can verify it by running hadoop in cmd.
Restart the IDE and run it.
I recently got the same error message while running a Spark application in IntelliJ IDEA. What I did was download the winutils.exe that is compatible with the Spark version I was running and move it to the Spark bin directory. Then, in IntelliJ, I edited the run configuration.
The 'Environment variables' area was empty, so I entered HADOOP_HOME = P:\spark-2.4.7-bin-hadoop2.7
Since winutils.exe is in the P:\spark-2.4.7-bin-hadoop2.7\bin directory, it will be located at runtime.
So, by setting HADOOP_HOME, the null becomes the HADOOP_HOME directory, and the complete path becomes P:\spark-2.4.7-bin-hadoop2.7\bin\winutils.exe
That was how I resolved it.

Hadoop Basic Examples WordCount

I am getting this error with a mostly out-of-the-box configuration of version 0.20.203.0.
Where should I look for the potential issue? Most of the configuration is out of the box. I was able to visit the local web pages for HDFS and the task manager.
I am guessing the error is related to a permissions issue with Cygwin and Windows. Also, from googling the problem, some say there might be some kind of out-of-memory issue. It is such a simple example that I don't see how that could be.
When I try to run the wordcount example:
$ hadoop jar hadoop-examples-0.20.203.0.jar wordcount /user/hduser/gutenberg /user/hduser/gutenberg-output6
I get this error:
2011-08-12 15:45:38,299 WARN org.apache.hadoop.mapred.TaskRunner:
attempt_201108121544_0001_m_000008_2 : Child Error
java.io.IOException: Task process exit with nonzero status of 127.
at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
2011-08-12 15:45:38,878 WARN org.apache.hadoop.mapred.TaskLog: Failed to
retrieve stdout log for task: attempt_201108121544_0001_m_000008_1
java.io.FileNotFoundException:
E:\projects\workspace_mar11\ParseLogCriticalErrors\lib\h\logs\userlogs\j
ob_201108121544_0001\attempt_201108121544_0001_m_000008_1\log.index (The
system cannot find the file specified)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:106)
at
org.apache.hadoop.io.SecureIOUtils.openForRead(SecureIOUtils.java:102)
at
org.apache.hadoop.mapred.TaskLog.getAllLogsFileDetails(TaskLog.java:112)
...
The userlogs/job* directory is empty. Maybe there is some permission issue with those directories.
I am running on Windows with Cygwin, so I don't really know which permissions to set.
I couldn't figure out this problem with the current version of Hadoop. I reverted from the current version to a previous release, hadoop-0.20.2. I had to play around with the core-site.xml configuration file and temp directories, but I eventually got HDFS and MapReduce to work properly.
The issue seems to be Cygwin, Windows, and the drive setup that I was using. Hadoop launches a new JVM process when it tries to invoke a 'child' map/reduce task. The actual JVM execute statement is in a shell script.
In my case, Hadoop couldn't find the path to the shell script. I am assuming the status code 127 error was the result of the Java Runtime exec not finding the shell script.

Resources