Hadoop -mkdir: Could not create the Java Virtual Machine

I have configured Hadoop 1.0.4 and started the following without any issue:
1. $ start-dfs.sh - works fine
2. $ start-mapred.sh - works fine
3. $ jps (output is below)
Output:
rahul@rahul-Inspiron-N4010:/usr/local/hadoop-1.0.4/bin$ jps
6964 DataNode
7147 SecondaryNameNode
6808 NameNode
7836 Jps
7254 JobTracker
7418 TaskTracker
But I am facing an issue while issuing the command
rahul@rahul-Inspiron-N4010:/usr/local/hadoop-1.0.4/bin$ hadoop -mkdir /user
I get the following error:
Unrecognized option: -mkdir
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
I applied the patch HDFS-1943.patch, but it was not useful.

Should be: hdfs dfs -mkdir /user
Consult documentation at http://hadoop.apache.org/docs/r1.0.4/file_system_shell.html

The fs option is missing:
hadoop fs -mkdir /user
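A minimal sketch of the corrected workflow on Hadoop 1.0.4, assuming the daemons listed by jps above are still running (the directory name /user is taken from the question):
# "-mkdir" alone is not a recognized subcommand, so the hadoop script hands it
# straight to java, which is why the "Could not create the Java Virtual Machine"
# error appears. The subcommand must come after "fs":
hadoop fs -mkdir /user
# Verify that the directory now exists in HDFS
hadoop fs -ls /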

Related

I can't get the NameNode to start in Hadoop

I'm trying to run Hadoop on a single node in pseudo-distributed mode.
I'm using Ubuntu 20.04 on WSL and have Java 8.
When I run:
start-dfs.sh
start-yarn.sh
And then run:
jps
My system outputs the following:
1829 SecondaryNameNode
2549 Jps
1612 DataNode
2188 NodeManager
2045 ResourceManager
Why isn't it showing "NameNode"?
I've already tried deleting the tmp files with:
rm -Rf <tmp dir>
Then formatted the namenode:
bin/hdfs namenode -format
And yet the same output appears when I run jps
What am I doing wrong?
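One way to find the actual failure reason (a diagnostic sketch, not part of the original question) is to read the NameNode log, assuming the default tarball layout where logs live under $HADOOP_HOME/logs:
# List the log files written by the daemons
ls $HADOOP_HOME/logs/
# The NameNode usually aborts with a stack trace near the end of its log,
# e.g. a clusterID mismatch or a missing/unformatted name directory
tail -n 50 $HADOOP_HOME/logs/hadoop-*-namenode-*.log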

Connection refused even when all daemons are working on Hadoop

I am working on Hadoop 2.x; when I run jps, it shows all the daemons running correctly.
[root@localhost Flume]# jps
3521 NodeManager
3058 DataNode
3252 SecondaryNameNode
4501 Jps
3419 ResourceManager
2957 NameNode
But when I run,
hadoop dfs -ls /
It says,
ls: Call From localhost.localdomain/127.0.0.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
please help me with it.
The command you gave, "hadoop dfs -ls /", is not correct; "hadoop dfs" is deprecated. The listing can be done in two ways:
hadoop fs -ls /   # generic form; works with whatever filesystem is configured
hdfs dfs -ls /    # HDFS-specific form that replaces the deprecated "hadoop dfs"
Note the difference between the two commands: the second word of the first command should be fs and not dfs.
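Since the error itself is a connection refusal to localhost:9000, it also helps to confirm that the NameNode is really listening there, as the linked ConnectionRefused wiki page suggests. A hedged sketch, assuming netstat is available on the box:
# Show the address the HDFS client will try to reach
hdfs getconf -confKey fs.defaultFS
# Check whether anything is listening on the NameNode port
netstat -lnt | grep 9000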

Folder Not Created with hadoop fs -mkdir

Hey, I am installing Hive on a Hadoop 2.0 multi-node cluster, and I am not able to create a folder using this command:
[hadoop@master ~]$ $HADOOP_HOME/bin/hadoop fs -mkdir /tmp
16/07/19 14:20:15 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[hadoop@master ~]$ $HADOOP_HOME/bin/hadoop fs -mkdir -p /user/hive/warehouse
16/07/19 14:24:12 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
More importantly, I am not able to find the created folder. Where does it get created? I am not sure. Please help.
JPS for Hadoop is working fine:
[hadoop@master ~]$ jps
2977 ResourceManager
2613 DataNode
3093 NodeManager
2822 SecondaryNameNode
2502 NameNode
5642 Jps
The warning you are getting after running the -mkdir command does not impact Hadoop functionality. It's just a warning; you can ignore it. See here for details.
About creating directories under the root, i.e. "/": it is a one-time activity and should be done by the superuser. Once you create the root directories like "/tmp", "/user" etc., you can create user-specific folders like "/user/hduser" and own them using commands such as:
sudo -u hdfs hdfs dfs -mkdir /tmp
OR
sudo -u hdfs hdfs dfs -mkdir -p /user/hive/warehouse
Once you have the main folder ready, just change its owner to the user who will be using it:
sudo -u hdfs hdfs dfs -chown hduser:hadoop /user/hive/warehouse
If you want to find the files/directories created on HDFS, you have to interact with the HDFS filesystem using CLI commands,
e.g. hdfs dfs -ls /
The data created on HDFS also has a physical location on your local filesystem, but you will not see your HDFS files and directories laid out there as-is. Look for the dfs.namenode.name.dir and dfs.datanode.data.dir properties in hdfs-site.xml under your installation, usually located at "/usr/local/hadoop/etc/hadoop/hdfs-site.xml".
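As a hedged shortcut, the same two properties can also be read from the live configuration instead of opening hdfs-site.xml (assuming the standard hdfs client is on the PATH):
# Local directory where the NameNode keeps its metadata
hdfs getconf -confKey dfs.namenode.name.dir
# Local directory (or directories) where DataNode block files are stored
hdfs getconf -confKey dfs.datanode.data.dir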

access file in datanode from namenode hadoop

I have configured the namenode on hadoopmaster and the datanode on hadoopslave. When I run start-dfs.sh on hadoopmaster, the namenode on hadoopmaster and the datanode on hadoopslave get started. When I try to execute the command hdfs dfs -ls / on hadoopmaster, I can't view the files that I had put from hadoopslave.
Note: I put a file from hadoopslave using hdfs dfs -put /sample.txt /
Correct me if I'm wrong!
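A small diagnostic sketch (not part of the original question) that shows whether both machines are really talking to the same HDFS; the hostnames hadoopmaster and hadoopslave are taken from the question:
# Run on hadoopmaster: the report should list the datanode running on hadoopslave
hdfs dfsadmin -report
# Run on both machines: the value must point at the same NameNode address
hdfs getconf -confKey fs.defaultFS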

hadoop file system list my own root directory

I met a very weird situation when I tried to install single-node Hadoop YARN 2.2.0 on my Mac. I followed the tutorial at this link: http://raseshmori.wordpress.com/2012/09/23/install-hadoop-2-0-1-yarn-nextgen/.
When I start Hadoop and run jps to check the status, it shows the following (which looks normal, I think):
5552 Jps
7162 ResourceManager
7512 Jps
7243 NodeManager
6962 DataNode
7060 SecondaryNameNode
6881 NameNode
However, after entering
hadoop fs -ls /
the files listed are the files in my own root directory, not the Hadoop file system root. There must be some error in how I set up Hadoop that mixes my own filesystem with HDFS. Could anyone give me a hint about it?
Use the following command for accessing HDFS
hadoop fs -ls hdfs://localhost:9000/
Or
Populate ${HADOOP_CONF_DIR}/core-site.xml as follows. If you do so, you will be able to access HDFS even without specifying the hdfs:// URI.
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
Add the following line at the start of the file $HOME/yarn/hadoop-2.0.1-alpha/libexec/hadoop-config.sh
export HADOOP_CONF_DIR=$HOME/yarn/hadoop-2.0.1-alpha/etc/hadoop
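After either fix (and a restart of HDFS), a quick check (a sketch; localhost:9000 is the value used in the core-site.xml above) is that the plain path and the fully qualified URI now list the same thing:
# Both commands should show the HDFS root, not the local filesystem root
hadoop fs -ls /
hadoop fs -ls hdfs://localhost:9000/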
