Apache Hadoop 2.0.0-alpha installation on a full cluster using federation - hadoop

I have installed the stable version of Hadoop successfully, but I am confused while installing Hadoop 2.0.0.
I want to install hadoop-2.0.0-alpha on two nodes, using federation on both machines. The hostnames are rsi-1 and rsi-2.
What should the values of the properties below be to implement federation? Both machines will also be used as datanodes.
fs.defaultFS
dfs.federation.nameservices
dfs.namenode.name.dir
dfs.datanode.data.dir
yarn.nodemanager.localizer.address
yarn.resourcemanager.resource-tracker.address
yarn.resourcemanager.scheduler.address
yarn.resourcemanager.address
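For illustration, a minimal sketch of the HDFS side of such a setup, assuming two nameservices ns1 and ns2 hosted on rsi-1 and rsi-2 (the nameservice names, port numbers and local paths below are assumptions, not verified values):
<!-- hdfs-site.xml, same on both nodes (illustrative values) -->
<property><name>dfs.federation.nameservices</name><value>ns1,ns2</value></property>
<property><name>dfs.namenode.rpc-address.ns1</name><value>rsi-1:9000</value></property>
<property><name>dfs.namenode.rpc-address.ns2</name><value>rsi-2:9000</value></property>
<property><name>dfs.namenode.name.dir</name><value>/hadoop/dfs/name</value></property>
<property><name>dfs.datanode.data.dir</name><value>/hadoop/dfs/data</value></property>
In core-site.xml, fs.defaultFS would typically point at the local node's namenode (e.g. hdfs://rsi-1:9000 on rsi-1), and the yarn.resourcemanager.* addresses in yarn-site.xml would all point at whichever host runs the ResourceManager (e.g. rsi-1:8032 for yarn.resourcemanager.address).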
One more point: in the stable version of Hadoop I have the configuration files under the conf folder in the installation directory.
But in the 2.0.0-alpha version there is an etc/hadoop directory, and it doesn't have mapred-site.xml or hadoop-env.sh. Do I need to copy the conf folder under the share folder into the Hadoop home directory, or do I need to copy these files from the share folder into the etc/hadoop directory?
Regards, Rashmi

You can run hadoop-setup-conf.sh in the sbin folder. It walks you through the configuration step by step.
Please remember that when it asks you to input a directory path, you should use the full path,
e.g., when it asks for the conf directory, you should input /home/user/Documents/hadoop-2.0.0/etc/hadoop
After it completes, remember to check every configuration file in etc/hadoop.
In my experience, I had to modify the JAVA_HOME variable in hadoop-env.sh and some properties in core-site.xml and mapred-site.xml.
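For example, the hadoop-env.sh edit is typically just pointing JAVA_HOME at the JDK on that node (the path below is an example and will differ on your machine):
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64   # set to your local JDK install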
Regards

Related

How does the hadoop directory differ from hadoop-x.x.x?

I am new to Hadoop. Recently, while running MapReduce jobs on an OpenStack Hadoop cluster, I cd'd into a directory on a datanode machine and found two Hadoop folders: one called "hadoop" and the other named "hadoop-2.7.1". Obviously, the latter makes more sense, as it tells the Hadoop version. The two folders contain the same sub-directories, but how do they differ from each other? And if I'd like to disable HDFS permission checking on this machine, which one should I go to?
Here is a screenshot
As the colors in the screenshot suggest, hadoop is not a separate directory but just a symbolic link, pointing to hadoop-2.7.1. Run ls -l to check this.
You should cd into the hadoop directory. It exists intentionally so that you do not have to write the Hadoop version explicitly. When a new version of Hadoop is deployed, a new versioned directory is created and the hadoop symbolic link is changed to point to the latest versioned directory, like this:
hadoop-2.7.1
hadoop-2.7.2
hadoop-2.7.3
hadoop -> hadoop-2.7.3
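For example, an upgrade could repoint the link with something like this (a sketch; the actual deployment tooling may differ):
ln -sfn hadoop-2.7.3 hadoop   # make "hadoop" point at the new release
ls -l hadoop                  # verify: hadoop -> hadoop-2.7.3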

What is the difference between the configuration files under /etc/hadoop/conf, /etc/hadoop/conf.cloudera.HDFS and /etc/hadoop/conf.cloudera.YARN?

I have Cloudera 5.7, and I have Cloudera Manager too.
Under the directory /etc/hadoop, I see three sub-directories:
/etc/hadoop/conf
/etc/hadoop/conf.cloudera.HDFS/
/etc/hadoop/conf.cloudera.YARN/
The hadoop-env.sh in ../conf/ is different from the one in ../conf.cloudera.HDFS/..,
and the core-site.xml in ../conf/ is different from the one in ../conf.cloudera.HDFS/.. as well.
The hadoop-env.sh in ../conf/ has settings for YARN, while the one under ../conf.cloudera.HDFS does not,
and the one in ../conf.cloudera.HDFS/.. has the settings for the namenode, datanodes, etc.
Since I have CM installed, I am wondering whether these configuration files are really in use.
If they are, and I need to change some environment variables, should I change all of these hadoop-env.sh files and copy them to the other nodes?
Thanks.
Cloudera Manager handles these settings for you. If you edit the configuration files manually, your changes will be erased by CM.
If you want to make a change, do it through CM.

Hadoop: copying a file to the Hadoop filesystem

I have copied a file from the local filesystem to HDFS, and the file got copied to /user/hduser/in:
hduser#vagrant:/usr/local/hadoop/hadoop-1.2.1$ bin/hadoop fs -copyFromLocal /home/hduser/afile in
Question:
1. How does Hadoop by default copy the file to this directory, /user/hduser/in? Where is this mapping specified in the conf files?
If you write the command as above, the file gets copied to your user's HDFS home directory, which is /user/<username>. See also here: HDFS Home Directory.
You can use an absolute pathname (one starting with "/") just like in a Linux filesystem, if you want to write the file to a different location.
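For example (paths are illustrative), a relative destination resolves against your HDFS home directory, while an absolute one is used as-is:
bin/hadoop fs -copyFromLocal /home/hduser/afile in        # ends up as /user/hduser/in
bin/hadoop fs -copyFromLocal /home/hduser/afile /data/in  # ends up exactly at /data/in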
Are you using a default VM? Basically, if you configure Hadoop from the binaries without using a preconfigured yum package, it doesn't have a default path. But if you install via yum on a Hortonworks or Cloudera VM, it comes with a default path, I guess.
Check core-site.xml (fs.default.name, or fs.defaultFS in newer releases) to see the default filesystem URI. "/" will point to the base URI set in that XML. Any path given in the command without a leading "/" is resolved relative to your HDFS home directory under that filesystem.
Hadoop picks up the default path defined in the configuration and writes the data there.

Failed to locate the winutils binary in the hadoop binary path

I am getting the following error while starting the namenode for the latest hadoop-2.2 release. I didn't find the winutils exe file in the Hadoop bin folder. I tried the commands below:
$ bin/hdfs namenode -format
$ sbin/yarn-daemon.sh start resourcemanager
ERROR [main] util.Shell (Shell.java:getWinUtilsPath(303)) - Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:278)
at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:300)
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:293)
at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:863)
Simple Solution:
Download it from here and add it to $HADOOP_HOME/bin
(Source)
IMPORTANT UPDATE:
For hadoop-2.6.0 you can download the binaries from the Titus Barik blog.
I not only needed to point HADOOP_HOME to the extracted directory [path], but also had to provide the system property -Djava.library.path=[path]\bin to load the native libs (dll).
If you face this problem when running a self-contained local application with Spark (i.e., after adding spark-assembly-x.x.x-hadoopx.x.x.jar or the Maven dependency to the project), a simpler solution would be to put winutils.exe (download from here) in "C:\winutil\bin". Then you can point Hadoop at that directory by adding the following line to the code:
System.setProperty("hadoop.home.dir", "c:\\winutil\\")
Source: Click here
If we directly take the binary distribution of the Apache Hadoop 2.2.0 release and try to run it on Microsoft Windows, we'll encounter ERROR util.Shell: Failed to locate the winutils binary in the hadoop binary path.
The binary distribution of the Apache Hadoop 2.2.0 release does not contain the Windows native components (like winutils.exe, hadoop.dll, etc.). These are required (not optional) to run Hadoop on Windows.
So you need to build a Windows-native binary distribution of Hadoop from the source code, following the "BUILD.txt" file located inside the source distribution of Hadoop. You can also follow the posts below for a step-by-step guide with screenshots:
Build, Install, Configure and Run Apache Hadoop 2.2.0 in Microsoft Windows OS
ERROR util.Shell: Failed to locate the winutils binary in the hadoop binary path
The statement
java.io.IOException: Could not locate executable null\bin\winutils.exe
shows that the null is received when expanding or replacing an environment variable. If you look at the source of Shell.java in the Common package, you will find that the HADOOP_HOME variable is not being set, so you receive null in its place, and hence the error.
So HADOOP_HOME needs to be set properly for this, or alternatively the hadoop.home.dir system property.
Hope this helps.
Thanks,
Kamleshwar.
Winutils.exe is used for running shell commands for Spark.
When you need to run Spark without installing Hadoop, you need this file.
Steps are as follows:
Download winutils.exe from the following location for hadoop 2.7.1:
https://github.com/steveloughran/winutils/tree/master/hadoop-2.7.1/bin
[NOTE: If you are using a different hadoop version, then download winutils from the corresponding hadoop version folder on GitHub at the location mentioned above.]
Now create a folder 'winutils' in the C:\ drive, then create a folder 'bin' inside 'winutils' and copy winutils.exe into that folder.
So the location of winutils.exe will be C:\winutils\bin\winutils.exe
Now open the environment variables dialog and set HADOOP_HOME=C:\winutils.
[NOTE: Do not add \bin to HADOOP_HOME, and there is no need to add HADOOP_HOME to Path.]
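For reference, a sketch of the same steps from a Windows Command Prompt, assuming the downloaded winutils.exe is in your Downloads folder:
rem create the folder layout and copy the downloaded binary into it
mkdir C:\winutils\bin
copy %USERPROFILE%\Downloads\winutils.exe C:\winutils\bin\
rem persist HADOOP_HOME for the current user; open a new prompt for it to take effect
setx HADOOP_HOME "C:\winutils"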
Your issue should now be resolved.
I just ran into this issue while working with Eclipse. In my case, I had the correct Hadoop version downloaded (hadoop-2.5.0-cdh5.3.0.tgz); I extracted the contents and placed them directly in my C drive. Then I went to
Eclipse->Debug/Run Configurations -> Environment (tab) -> and added
variable: HADOOP_HOME
Value: C:\hadoop-2.5.0-cdh5.3.0
You can download winutils.exe here:
http://public-repo-1.hortonworks.com/hdp-win-alpha/winutils.exe
Then copy it to your HADOOP_HOME/bin directory.
In PySpark, to run a local Spark application using PyCharm, use the lines below:
import os
os.environ['HADOOP_HOME'] = "C:\\winutils"
print(os.environ['HADOOP_HOME'])
winutils.exe is required for Hadoop to perform Hadoop-related commands. Please download the
hadoop-common-2.2.0 zip file. winutils.exe can be found in its bin folder. Extract the zip file and copy winutils.exe into the local hadoop/bin folder.
I was facing the same problem. Removing the bin\ from the HADOOP_HOME path solved it for me. The path for the HADOOP_HOME variable should look something like:
C:\dev\hadoop2.6\
System restart may be needed. In my case, restarting the IDE was sufficient.
As most answers here refer to pretty old versions of winutils, I will leave a link to the most comprehensive repository, which supports all versions of Hadoop including the most recent ones:
https://github.com/kontext-tech/winutils
(find the directory corresponding to your Hadoop version, or try the most recent one).
If you have admin permissions on your machine:
Put the bin directory into C:\winutils.
The whole path should be C:\winutils\bin\winutils.exe.
Set HADOOP_HOME to C:\winutils.
If you don't have admin permissions or want to put the binaries into user space:
Put the bin directory into C:\Users\vryabtse\AppData\Local\Programs\winutils or a similar user directory.
Set the HADOOP_HOME value to the path of that directory.
Set the HADOOP_HOME variable in Windows to resolve the problem.
You can find the answer in org/apache/hadoop/hadoop-common/2.2.0/hadoop-common-2.2.0-sources.jar!/org/apache/hadoop/util/Shell.java:
The IOException comes from
public static final String getQualifiedBinPath(String executable)
    throws IOException {
  // construct hadoop bin path to the specified executable
  String fullExeName = HADOOP_HOME_DIR + File.separator + "bin"
      + File.separator + executable;
  File exeFile = new File(fullExeName);
  if (!exeFile.exists()) {
    throw new IOException("Could not locate executable " + fullExeName
        + " in the Hadoop binaries.");
  }
  return exeFile.getCanonicalPath();
}
and HADOOP_HOME_DIR comes from
// first check the Dflag hadoop.home.dir with JVM scope
String home = System.getProperty("hadoop.home.dir");
// fall back to the system/user-global env variable
if (home == null) {
  home = System.getenv("HADOOP_HOME");
}
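Given that fallback order, if setting the environment variable is inconvenient you can pass the property straight to the JVM instead; a sketch with placeholder jar and class names:
java -Dhadoop.home.dir=C:\winutils -cp app.jar com.example.MyApp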
Download the desired version of the hadoop folder (say, if you are installing Spark on Windows, the hadoop version your Spark build was built against) from this link as a zip.
Extract the zip to the desired directory.
You need to have a directory of the form hadoop\bin (explicitly create such a hadoop\bin directory structure if you want), with bin containing all the files from the bin folder of the downloaded hadoop. This will contain many files such as hdfs.dll, hadoop.dll, etc. in addition to winutils.exe.
Now create the environment variable HADOOP_HOME and set it to <path-to-hadoop-folder>\hadoop. Then add ;%HADOOP_HOME%\bin; to the PATH environment variable.
Open a "new command prompt" and try rerunning your command.
Download winutils.exe
from the URL:
https://github.com/steveloughran/winutils/hadoop-version/bin
and paste it under HADOOP_HOME/bin.
Note: you should set the environment variables:
User variable:
Variable: HADOOP_HOME
Value: Hadoop or Spark directory
I used "hbase-1.3.0" and "hadoop-2.7.3" versions. Setting HADOOP_HOME environment variable and copying 'winutils.exe' file under HADOOP_HOME/bin folder solves the problem on a windows os.
Attention to set HADOOP_HOME environment to the installation folder of hadoop(/bin folder is not necessary for these versions).
Additionally I preferred using cross platform tool cygwin to settle linux os functionality (as possible as it can) because Hbase team recommend linux/unix env.
I was getting the same issue on Windows. I fixed it by:
Downloading hadoop-common-2.2.0-bin-master from the link.
Creating a user variable HADOOP_HOME in the environment variables and assigning the path of the hadoop-common bin directory as its value. (You can verify it by running hadoop in cmd.)
Restarting the IDE and running the job again.
I recently got the same error message while running a Spark application in IntelliJ IDEA. What I did was download the winutils.exe that is compatible with the Spark version I was running and move it to the Spark bin directory. Then, in IntelliJ, I edited the run configuration.
The 'Environment variables' area was empty. So, I entered HADOOP_HOME = P:\spark-2.4.7-bin-hadoop2.7
Since winutils.exe is in the P:\spark-2.4.7-bin-hadoop2.7\bin directory, it will locate the file at runtime.
So, by setting HADOOP_HOME, the null is replaced by the HADOOP_HOME directory, and the complete path becomes P:\spark-2.4.7-bin-hadoop2.7\bin\winutils.exe.
That was how I resolved it.

Where are the configuration files stored in CDH4?

I set up CDH4.
Now I can configure Hadoop on the web page.
I want to know where CDH puts the configuration files on the local filesystem.
For example, I want to find core-site.xml, but where is it?
By default, the installation of CDH has the conf directory located in
/etc/hadoop/
You could always use the following command to find the file:
$ sudo find / -name "core-site.xml"
