Why is there a need for hadoop commands in pseudo-distributed mode?

This might be a stupid question, but I need to know.
For example: why do we need the hadoop fs -ls command to list files? Why can't plain ls be used instead?
In pseudo-distributed mode, is it the case that part of the filesystem is handed over to the Hadoop file system, and that part is only accessible through the Hadoop namenode daemon? That is my guess. Please explain.

ls lists the files visible on your local machine's filesystem.
You can set the fs.defaultFS property to file:///, which is the default; then both commands act the same, but this is not considered pseudo-distributed mode.
Pseudo-distributed mode requires that you specify namenode and datanode storage directories on each respective system in the cluster, and hdfs dfs commands will only list the files that are known to the namenode.
It's called pseudo-distributed only because it is a single node. Once you have that working, adding another node should be straightforward, given appropriate network connections.
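As a quick illustration on a pseudo-distributed setup (the paths here are only examples, not anything from your install), the same -ls can be pointed at either filesystem:
ls /user                      # local Linux directory; usually doesn't exist
hdfs dfs -ls /user            # /user inside HDFS, i.e. only what the namenode knows about
hdfs dfs -ls file:///home     # the same command aimed at the local filesystem via the file:// scheme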

Related

Namenode and Jobtracker information on Hadoop cluster

How can I get the following information on the Hadoop cluster?
1. namenode and jobtracker name
2. list of all nodes with their roles on the cluster
To get namenode info:
hdfs getconf -confKey fs.defaultFS
For the JobTracker:
hdfs getconf -confKey yarn.resourcemanager.address.rm2
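Note that yarn.resourcemanager.address.rm2 assumes a ResourceManager HA setup (rm2 is one of the HA ids). On a cluster without HA, my understanding is that the plain key is used instead, following the same pattern:
hdfs getconf -confKey yarn.resourcemanager.address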
I am using a Cloudera-based cluster and also working on EMR.
In both clusters I can find the information in the configuration directory.
To get the namenode information, go into the core-site.xml file and look for fs.defaultFS, as #daemon12 said.
Here is a direct way to get it.
For the namenode information, use the command below:
cat /etc/hadoop/conf/core-site.xml | grep '8020'
Here is the result
<value>hdfs://10.872.22.1:8020</value>
The value inside the value tag is the namenode information.
Similarly, to get the jobtracker information, do the following:
cat /etc/hadoop/conf/yarn-site.xml | grep '8032'
Here is the result
<value>10.872.12.32:8032</value>
Again the jobtracker value is inside the value tag.
Generally the NN and JT information is needed to run Oozie jobs, and this method will help for that purpose.
DISCLAIMER: I am grepping the output of cat for the namenode and jobtracker port numbers, which are 8020 and 8032 respectively. These are the widely used default ports for the NN and JT in Hadoop. If your organization uses different ones, grep for those instead to get the right result.
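A sketch that avoids hard-coding the port numbers at all: grep for the property name with one line of trailing context, assuming (as is usually the case) that the <value> element sits on the line right after the <name> element:
grep -A1 'fs.defaultFS' /etc/hadoop/conf/core-site.xml
grep -A1 'yarn.resourcemanager.address' /etc/hadoop/conf/yarn-site.xml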
Along with the command-line way of getting the information, you can also get similar information in the browser:
http://<namenode>:50070 (for general Hadoop/HDFS information)
http://<namenode>:50030 (for JobTracker-related information)
These are default ports. You can check here for more information.
With the appropriate authorization (e.g. sudo -u hdfs), you may try:
hdfs dfsadmin -report

Does changing the value of dfs.blocksize affect existing data?

My Hadoop version is 2.5.2. I am changing dfs.blocksize in the hdfs-site.xml file on the master node. I have the following questions:
1) Will this change affect the existing data in HDFS?
2) Do I need to propagate this change to all the nodes in the Hadoop cluster, or is changing it only on the NameNode sufficient?
1) Will this change affect the existing data in HDFS
No, it will not. Existing files keep the old block size. In order for them to pick up the new block size, you need to rewrite the data. You can either do a hadoop fs -cp or a distcp on your data. The new copy will have the new block size, and you can then delete your old data.
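For example, a rough sketch of rewriting one directory so it picks up the new block size (the paths are hypothetical, and you should verify the copy before deleting anything):
hadoop fs -cp /data/mydir /data/mydir_newblocks   # rewritten with the block size currently configured on the client
hadoop fs -rm -r /data/mydir                      # only after verifying the copy
hadoop fs -mv /data/mydir_newblocks /data/mydir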
2) Do I need to propagate this change to all the nodes in the Hadoop cluster, or is changing it only on the NameNode sufficient?
I believe in this case you only need to change the NameNode. However, this is a very very bad idea. You need to keep all of your configuration files in sync for a number of good reasons. When you get more serious about your Hadoop deployment, you should probably start using something like Puppet or Chef to manage your configs.
Also, note that whenever you change a configuration, you need to restart the NameNode and DataNodes in order for them to change their behavior.
Interesting note: you can set the block size of individual files as you write them, overriding the default. E.g., hadoop fs -D dfs.blocksize=134217728 -put a b (in Hadoop 1.x the property was dfs.block.size).
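If you want to confirm the block size a file ended up with, hadoop fs -stat with the %o format prints it (using the same hypothetical file name as above):
hadoop fs -stat 'block size: %o bytes' b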
You should make the change in hdfs-site.xml on all slaves as well... dfs.blocksize should be consistent across all datanodes.
Changing the block size in hdfs-site.xml will only affect new data.
Which distribution are you using? From your questions it looks like you are using the Apache distribution. The easiest way I can think of is to write a shell script to first delete hdfs-site.xml on the slaves, like:
ssh username@domain.com 'rm /some/hadoop/conf/hdfs-site.xml'
ssh username@domain2.com 'rm /some/hadoop/conf/hdfs-site.xml'
ssh username@domain3.com 'rm /some/hadoop/conf/hdfs-site.xml'
Then copy hdfs-site.xml from the master to all the slaves:
scp /hadoop/conf/hdfs-site.xml username@domain.com:/hadoop/conf/
scp /hadoop/conf/hdfs-site.xml username@domain2.com:/hadoop/conf/
scp /hadoop/conf/hdfs-site.xml username@domain3.com:/hadoop/conf/
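If the list of slaves is long, the same thing can be done with a small loop over the slaves file (the file path and user name here are assumptions):
for host in $(cat /hadoop/conf/slaves); do
  scp /hadoop/conf/hdfs-site.xml username@"$host":/hadoop/conf/   # push the master's copy to each slave
done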

Explanation of the hadoop file system

Can anyone help me understand the data storage concept of Hadoop?
As I understand it, Hadoop deals with an fsimage and data blocks, and the fsimage and edit log paths are set in hdfs-site.xml. But what about the data blocks? Can anyone help me with this? I am a little confused about where the /user and /tmp directories are actually present in the filesystem.
I used this link to set up a single node hadoop cluster: http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
Files are split into blocks and stored in the Hadoop Distributed File System (HDFS). Consult the HDFS module of Yahoo's Hadoop Tutorial for a description of HDFS. The directories stored in HDFS can be viewed by typing the following command into a terminal: hadoop dfs -ls
The namenode's fsimage keeps track of the filesystem namespace, i.e. which blocks make up which files; the datanodes report the blocks they actually hold. In the hdfs-site.xml file, the 'dfs.data.dir' property defines where a datanode stores the underlying block files on the local filesystem. This can be a comma-separated list of directories (think multiple disks).
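To see where that actually points on your machine, you can ask the loaded configuration for the datanode directory (in Hadoop 2.x the key is dfs.datanode.data.dir, the newer name for dfs.data.dir) and then look inside it; the blocks show up as blk_* files plus .meta checksum files under its current/ subtree:
hdfs getconf -confKey dfs.datanode.data.dir
find <that-directory> -name 'blk_*' | head      # the actual block files on local disk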

Need help adding multiple DataNodes in pseudo-distributed mode (one machine), using Hadoop-0.18.0

I am a student, interested in Hadoop and started to explore it recently.
I tried adding an additional DataNode in the pseudo-distributed mode but failed.
I am following the Yahoo developer tutorial, so the version of Hadoop I am using is hadoop-0.18.0.
I tried to start it up using two methods I found online:
Method 1 (link)
I have a problem with this line
bin/hadoop-daemon.sh --script bin/hdfs $1 datanode $DN_CONF_OPTS
--script bin/hdfs doesn't seem to be valid in the version I am using. I changed it to --config $HADOOP_HOME/conf2 with all the configuration files in that directory, but when the script is run it gives the error:
Usage: Java DataNode [-rollback]
Any idea what this error means? The log files are created, but the DataNode did not start.
Method 2 (link)
Basically I duplicated the conf folder to a conf2 folder, making the necessary changes documented on the website to hadoop-site.xml and hadoop-env.sh. Then I ran the command
./hadoop-daemon.sh --config ..../conf2 start datanode
It gives the error:
datanode running as process 4190. stop it first.
So I guess this is the first DataNode that was started, and the command failed to start another one.
Is there anything I can do to start an additional DataNode in the Yahoo VM Hadoop environment? Any help/advice would be greatly appreciated.
Hadoop start/stop scripts use /tmp as the default directory for storing the PIDs of already-started daemons. In your situation, when you start the second datanode, the startup script finds the /tmp/hadoop-someuser-datanode.pid file left by the first datanode and assumes that a datanode daemon is already running.
The plain solution is to set the HADOOP_PID_DIR environment variable to something other than /tmp. Also, do not forget to update all the network port numbers in conf2.
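A minimal sketch of that approach, reusing the conf2 directory from the question and assuming its ports have already been changed (the pid directory name is an arbitrary choice):
export HADOOP_PID_DIR=$HADOOP_HOME/pids-dn2                       # anywhere other than the default /tmp
bin/hadoop-daemon.sh --config $HADOOP_HOME/conf2 start datanode   # now writes its pid file under pids-dn2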
The smart solution is to start a second VM with a hadoop environment and join the two in a single cluster. That is the way hadoop is intended to be used.

How to switch between namenodes in Hadoop?

Pseudo-distributed cluster:
Suppose I first create a namenode on machine "A" with the name "Root1".
This creates an HDFS on that machine.
Now I copy some files to HDFS using copyFromLocal and run some MapReduce jobs.
Then I need to change some /conf files.
I change the config files, and to make them effective I format the namenode with the name "Root2".
If I browse the HDFS, it is empty (meaning it does not contain the files copied earlier for "Root1").
If I want to see the old files (for "Root1"), is there any way to switch to that HDFS or namenode (Root2 to Root1)?
To be clear: did you launch another namenode on your machine?
Type sudo jps in a console or open http://localhost:50070 in a browser and check whether you have more than one namenode. If there is just one node, you have lost your data from HDFS. If you have two namenodes, you can inspect each filesystem in the browser at http://localhost:50070.
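As concrete commands, those checks look roughly like this (the process names are what a typical pseudo-distributed install shows):
sudo jps                       # see how many NameNode (and DataNode) processes are listed
curl http://localhost:50070/   # NameNode web UI, same information as opening it in a browser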
Here are instructions on how to launch more than one datanode on one machine.
