hadoop fs relates to any generic filesystem supported by Hadoop, but hdfs dfs relates to HDFS only

hadoop fs relates to any generic filesystem supported by Hadoop, but hdfs dfs relates to HDFS only.
Then why is the following command allowed?
hdfs dfs -ls file:///
How can I access the local filesystem using hdfs dfs?
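For reference, a minimal sketch of the behaviour being asked about, assuming a stock configuration where fs.defaultFS points at an hdfs:// URI. Both commands go through Hadoop's generic FileSystem layer, and the URI scheme in the path selects the implementation:
hdfs dfs -ls file:///    # the file:// scheme forces the local filesystem
hdfs dfs -ls /           # no scheme, so the default filesystem (HDFS here) is used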

Related

HDFS Namenode High Availability

I enabled NameNode High Availability using Ambari.
I want to verify the connection using dfs.nameservices (the nameservice ID) before I start coding.
Is there any command line or tool to verify it?
You can use the normal HDFS CLI.
hdfs dfs -ls hdfs://nameservice/user
Which should also work the same as
hdfs dfs -ls hdfs:///user
Or, giving your active namenode:
hdfs dfs -ls hdfs://namenode-1:port/user
If you provide the standby namenode instead, it will say that operation READ is not supported in state standby.
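If you want a check that does not depend on a particular data path existing, here is a small sketch of two other ways to probe the HA setup; nn1 and nn2 are assumed namenode IDs, so substitute whatever dfs.ha.namenodes.<nameservice> lists in your hdfs-site.xml:
hdfs getconf -confKey dfs.nameservices    # print the configured nameservice ID
hdfs haadmin -getServiceState nn1         # reports active or standby
hdfs haadmin -getServiceState nn2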

hadoop copy a local file to Hadoop FS error

I want to copy a local file into the Hadoop FS. I run this command:
sara@ubuntu:/usr/lib/hadoop/hadoop-2.3.0/bin$ hadoop fs -copyFromLocal /home/sara/Downloads/CA-GrQc.txt /usr/lib/hadoop/hadoop-2.3.0/${HADOOP_HOME}/hdfs/namenode
and
sara@ubuntu:/usr/lib/hadoop/hadoop-2.3.0/bin$ hdfs dfs -copyFromLocal /home/sara/Downloads/CA-GrQc.txt /usr/lib/hadoop/hadoop-2.3.0/${HADOOP_HOME}/hdfs/namenode
and even if I run: hdfs dfs -ls
I get this error:
> WARN util.NativeCodeLoader: Unable to load native-hadoop library for
> your platform... using builtin-java classes where applicable
> copyFromLocal: `.': No such file or directory
I don't know why I get this error. Any ideas, please?
According to your input, your Hadoop installation seems to be working fine. What is wrong is that hadoop fs -copyFromLocal expects an HDFS directory as the target directory, not the local directory where Hadoop stores its blocks.
So in your case the command should look like this (for example):
sara@ubuntu:/usr/lib/hadoop/hadoop-2.3.0/bin$ hdfs dfs -copyFromLocal /home/sara/Downloads/CA-GrQc.txt /sampleDir/
where sampleDir is a directory you create with the hadoop fs -mkdir command.
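Putting the whole sequence together, a minimal sketch; /sampleDir is just a placeholder target, and the local path is the one from the question:
hdfs dfs -mkdir -p /sampleDir
hdfs dfs -copyFromLocal /home/sara/Downloads/CA-GrQc.txt /sampleDir/
hdfs dfs -ls /sampleDir    # verify the file landed in HDFS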

hadoop fs -ls results in "no such file or directory"

I have installed and configured Hadoop 2.5.2 on a 10-node cluster. One node is acting as the master node and the other nodes as slave nodes.
I have a problem executing hadoop fs commands. The hadoop fs -ls command works fine with an HDFS URI, but it gives the message "ls: `.': No such file or directory" when used without an HDFS URI:
ubuntu@101-master:~$ hadoop fs -ls
15/01/30 17:03:49 WARN util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
ls: `.': No such file or directory
ubuntu@101-master:~$
Whereas executing the same command with an HDFS URI gives:
ubuntu@101-master:~$ hadoop fs -ls hdfs://101-master:50000/
15/01/30 17:14:31 WARN util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
Found 3 items
drwxr-xr-x - ubuntu supergroup 0 2015-01-28 12:07 hdfs://101-master:50000/hvision-data
-rw-r--r-- 2 ubuntu supergroup 15512587 2015-01-28 11:50 hdfs://101-master:50000/testimage.seq
drwxr-xr-x - ubuntu supergroup 0 2015-01-30 17:03 hdfs://101-master:50000/wrodcount-in
ubuntu@101-master:~$
I am getting an exception in my MapReduce program because of this behavior: jarlib refers to the HDFS file location, whereas I want jarlib to refer to the jar files stored on the local file system of the Hadoop nodes.
The behaviour you are seeing is expected. Let me explain what's going on when you work with hadoop fs commands.
The command's syntax is this: hadoop fs -ls [path]
By default, when you don't specify [path] for the above command, hadoop expands the path to /user/[username] in hdfs, where [username] is replaced with the Linux username of whoever is executing the command.
So, when you execute this command:
ubuntu@101-master:~$ hadoop fs -ls
the reason you are seeing the error ls: `.': No such file or directory is that hadoop is looking for the path /user/ubuntu, and it seems this path doesn't exist in hdfs.
The reason this command:
ubuntu@101-master:~$ hadoop fs -ls hdfs://101-master:50000/
works is that you have explicitly specified [path], and it is the root of hdfs. You can also do the same using this:
ubuntu@101-master:~$ hadoop fs -ls /
which automatically gets evaluated to the root of hdfs.
Hope this clears up the behaviour you are seeing while executing the hadoop fs -ls command.
Hence, if you want to specify a local file system path, use the file:/// URL scheme.
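To make the resolution rules concrete, a short sketch assuming fs.default.name is hdfs://101-master:50000 as in the question; the first three listings resolve to the same location once /user/ubuntu exists:
hadoop fs -ls                                       # expands to /user/ubuntu on the default filesystem
hadoop fs -ls /user/ubuntu                          # absolute path on the default filesystem
hadoop fs -ls hdfs://101-master:50000/user/ubuntu   # fully qualified URI
hadoop fs -ls file:///home/ubuntu                   # explicit scheme: the local filesystem instead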
This has to do with the missing home directory for the user. Once I created the home directory under hdfs for the logged-in user, it worked like a charm.
hdfs dfs -mkdir /user
hdfs dfs -mkdir /user/{loggedin user}
hdfs dfs -ls
This method fixed my problem.
The user directory in Hadoop (in HDFS) is
/user/<your operating system user>
If you get this error message, it may be because you have not yet created your user directory within HDFS.
Use
hadoop fs -mkdir -p /user/<current operating system user>
To see what your current operating system user is, use:
id -un
After that, hadoop fs -ls should start working...
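The two steps can also be folded into one line; this is just a convenience sketch using shell command substitution, and depending on HDFS permissions you may need to run the mkdir as the HDFS superuser:
hadoop fs -mkdir -p /user/$(id -un)    # create the HDFS home directory for the current OS user
hadoop fs -ls                          # now resolves to /user/<current user>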
There are a couple of things at work here. Based on "jarlib is referring to the HDFS file location", it sounds like you do have an HDFS path set as your fs.default.name, which is the typical setup. So when you type hadoop fs -ls, it is indeed trying to look inside HDFS, except that it is looking in your current working directory, which should be something like hdfs://101-master:50000/user/ubuntu. The error message is unfortunately somewhat confusing, since it doesn't tell you that . was interpreted to be that full path. If you run hadoop fs -mkdir /user/ubuntu, then hadoop fs -ls should start working.
This problem is unrelated to your "jarlib" problem; whenever you want to refer to files stored on the local filesystem, but where the path goes through Hadoop's Path resolution, you simply need to add file:/// to force Hadoop to use the local filesystem. For example:
hadoop fs -ls file:///tmp
Try passing your jarfile paths as file:///path/to/your/jarfile and it should work.
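As one illustration of where such paths get passed (not necessarily how your job is wired), the generic -libjars option also accepts explicitly schemed paths, assuming your driver goes through ToolRunner/GenericOptionsParser; the jar names and paths below are hypothetical:
hadoop jar myapp.jar com.example.MyJob -libjars file:///opt/jars/dep1.jar,file:///opt/jars/dep2.jar /input /output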
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your
platform... using builtin-java classes where applicable
This warning can be removed by adding this line to your .bashrc file:
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=/usr/local/hadoop/lib/native"
Here, /usr/local/hadoop is the location where Hadoop is installed.
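After adding the line, reload the shell configuration; if you want to confirm whether the native library is actually being picked up, Hadoop 2.x ships a small checker (same installation-path assumption as above):
source ~/.bashrc
hadoop checknative -a    # lists which native libraries (hadoop, zlib, snappy, ...) were loaded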

error while running any hadoop hdfs file system command

I am very new to Hadoop. I am referring to the "Hadoop For Dummies" book.
I have set up a VM with the following specs:
Hadoop version: 2.0.6-alpha
Bigtop
OS: CentOS
The problem is that while running any hdfs file system command I get the following error.
Example command: hadoop hdfs dfs -ls
Error: Could not find or load main class hdfs
Please advise.
Try running:
hadoop fs -ls
or
hdfs dfs -ls
What do they return?
fs and dfs are the same command.
See: Difference between `hadoop dfs` and `hadoop fs`
Remove either hadoop or hdfs and the command should run.
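Concretely, starting from the failing hadoop hdfs dfs -ls, either of these should run:
hdfs dfs -ls      # "hadoop" removed
hadoop dfs -ls    # "hdfs" removed; older, deprecated spelling (hadoop fs -ls is preferred)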

Cloudera CDH4 - how come I can't browse the hdfs filesystem from the nodes?

I installed my test cluster using Cloudera Manager Free Edition.
I can only browse the filesystem from the main NameNode. On the other nodes, running hadoop dfs -ls only shows the local folder.
jps shows Jps, TaskTracker, and DataNode running on the nodes.
MapReduce tasks/jobs run fine across all the nodes as a cluster.
With my custom-setup Hadoop cluster (without Cloudera), I can easily browse and manipulate the hdfs filesystem from every node (e.g. I can run hadoop dfs -mkdir test1 on all the nodes, whereas in CDH4 that only works on the NameNode).
Why is this?
Try using the command ./bin/hadoop fs -ls / for HDFS browsing.
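If the slash form still only shows local files on a worker node, it is worth checking which default filesystem the client configuration on that node resolves to; a diagnostic sketch, where the namenode host and the 8020 port are assumptions to adjust for your cluster, and hdfs getconf is assumed to be available in your CDH4 client:
hdfs getconf -confKey fs.defaultFS            # should print an hdfs:// URI, not file:///
hadoop fs -ls hdfs://<namenode-host>:8020/    # list HDFS explicitly, bypassing the default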
