Why does the hadoop hdfs command "hdfs dfsadmin -refreshNodes" not work? - hadoop

I am trying to re-read the hosts and exclude files to update the set of DataNodes that are allowed to connect to the NameNode. After I finished the configuration, I executed "hdfs dfsadmin -refreshNodes", but it doesn't work.
My Hadoop version is 3.1.2 (32-bit); the OS is CentOS 7 (64-bit). The property I configured is:
<property>
<name>dfs.hosts</name>
<value>/opt/install/hadoop-3.1.2/etc/hadoop/dfs.hosts</value>
</property>
After I execute the refresh command, the console displays:
2019-07-07 09:31:15,468 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Refresh nodes successful
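For reference, the decommission/recommission workflow that -refreshNodes drives normally needs both an include and an exclude file. A minimal sketch is below, assuming the same directory layout as the property above (the dfs.exclude file name is only an example):
<property>
  <name>dfs.hosts</name>
  <value>/opt/install/hadoop-3.1.2/etc/hadoop/dfs.hosts</value>
</property>
<property>
  <name>dfs.hosts.exclude</name>
  <value>/opt/install/hadoop-3.1.2/etc/hadoop/dfs.exclude</value>
</property>
# edit dfs.hosts / dfs.exclude (one hostname per line), then ask the NameNode to re-read them
hdfs dfsadmin -refreshNodes
hdfs dfsadmin -report    # check which DataNodes are now live, dead, or decommissioning
Also keep in mind that -refreshNodes only re-reads the files named by these properties; if dfs.hosts itself was added to hdfs-site.xml after the NameNode was started, the NameNode must be restarted before the property takes effect at all.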

Related

The NameNode process isn't present when executing jps

I'm new to the Hadoop ecosystem.
I installed Hadoop 3.3.0 in pseudo-distributed mode.
The "All Applications" page at http://localhost:8088/ is working, but I can't view the NameNode UI at http://localhost:9870/ (This site can't be reached).
$ jps
24553 Jps
20537 NodeManager
20429 ResourceManager
and
$ hadoop version
Hadoop 3.3.0
Source code repository https://github.com/apache/hadoop.git -r aa96f1871bfd858f9bac59cf2a81ec470da649af
Compiled by brahma on 2020-07-06T18:21Z
Compiled with protoc 3.7.1
From source with checksum 5dc29b802d6ccd77b262ef9d04d19c4
This command was run using /usr/local/hadoop/share/hadoop/common/hadoop-common-3.3.0.jar
I tried to restart the processes, but in vain:
$ stop-all.sh
WARNING: Stopping all Apache Hadoop daemons as mhannani in 10 seconds.
WARNING: Use CTRL-C to abort.
Stopping namenodes on [HP]
Stopping datanodes
Stopping secondary namenodes [HP]
2021-01-06 16:42:07,540 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Stopping nodemanagers
Stopping resourcemanager
Then I formatted the NameNode:
$ hdfs namenode -format
2021-01-06 16:44:14,683 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = HP/127.0.1.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 3.3.0
and then
$ start-all.sh
WARNING: Attempting to start all Apache Hadoop daemons as mhannani in 10 seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
Starting namenodes on [HP]
Starting datanodes
Starting secondary namenodes [HP]
HP: ERROR: Cannot set priority of secondarynamenode process 29847
2021-01-06 16:45:38,266 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting resourcemanager
Starting nodemanagers
Please, how can I fix that, so I can access my HDFS file system from the browser as on earlier versions of Hadoop, at http://localhost:9870/50075 ?
Any help or advice would be appreciated, Thanks folks.
The issue was that the NameNode and DataNode paths on my local file system were not set correctly,
in $HADOOP_HOME/etc/hadoop/hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>file://AbsolutePATH/TO/WHERE/THE/namenode/Should/be/stored</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>file://The/same/for/dataNode</value>
</property>
</configuration>
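As a side note, dfs.name.dir and dfs.data.dir are the old, deprecated property names; in Hadoop 3.x the canonical names are dfs.namenode.name.dir and dfs.datanode.data.dir, and the value should be an absolute file:// URI. A filled-in sketch (the directories under /home/mhannani are only placeholders, any writable paths will do):
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///home/mhannani/hadoopdata/hdfs/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///home/mhannani/hadoopdata/hdfs/datanode</value>
</property>
After changing these paths, re-run hdfs namenode -format and start-all.sh; the NameNode UI at http://localhost:9870/ should then come up.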

Hadoop 2.7.3: Cannot see DataNode/ResourceManager process after starting hdfs and yarn

I'm using a Mac, and my Java version is:
$java -version
java version "1.8.0_111"
Java(TM) SE Runtime Environment (build 1.8.0_111-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)
I followed this link: https://dtflaneur.wordpress.com/2015/10/02/installing-hadoop-on-mac-osx-el-capitan/
I first ran brew install hadoop, configured the SSH connection and the XML files as required, and then ran:
start-dfs.sh
start-yarn.sh
The screen output is like this:
$start-dfs.sh
17/05/06 09:58:32 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: namenode running as process 74213. Stop it first.
localhost: starting datanode, logging to /usr/local/Cellar/hadoop/2.7.3/libexec/logs/hadoop-x-datanode-xdeMacBook-Pro.local.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: secondarynamenode running as process 74417. Stop it first.
17/05/06 09:58:39 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Then, using jps, I cannot see "DataNode" or "ResourceManager". I suppose DataNode is an HDFS module and ResourceManager is a YARN module:
$jps
74417 SecondaryNameNode
75120 Jps
74213 NameNode
74539 ResourceManager
74637 NodeManager
I can list hdfs files:
$hdfs dfs -ls /
17/05/06 09:58:59 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
drwxr-xr-x - x supergroup 0 2017-05-05 23:50 /user
But running the pi example throws an exception:
$hadoop jar /usr/local/Cellar/hadoop/2.7.3/libexec/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar pi 2 5
Number of Maps = 2
Samples per Map = 5
17/05/06 10:19:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/05/06 10:19:49 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/x/QuasiMonteCarlo_1494037188550_135794067/in/part0 could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
I wonder if I missed any configuration. How can I make sure everything runs successfully, and how can I check or troubleshoot possible causes of failure?
Thanks.
I am still in the learning phase too. This error comes when there is no DataNode available to read from or write to.
You can check the NameNode web UI at http://localhost:50070 to see whether any DataNode is running.
For troubleshooting, you can check the logs generated under the Hadoop installation directory. If you can share those logs, I can try to help.
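To make that concrete, a quick way to check for a live DataNode (these are standard HDFS admin commands; the log path pattern is inferred from the .out path that start-dfs.sh printed above):
hdfs dfsadmin -report    # the "Live datanodes (N)" line shows what the NameNode sees
jps                      # DataNode should be listed here if the process actually started
# if it never started, the cause is usually at the end of its log:
tail -n 50 /usr/local/Cellar/hadoop/2.7.3/libexec/logs/hadoop-*-datanode-*.log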

Hadoop: Incorrect configuration

Hi Stack Overflow community,
I've been wanting to install Hadoop, but I have run into a problem.
I've looked at other approaches, but I still keep receiving the same error. I am completely new to Hadoop, so I don't really know where to go. I am on a MacBook Pro with El Capitan, if relevant. Once I run sbin/start-dfs.sh I receive this:
sbin/start-dfs.sh
16/05/10 11:09:37 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
Starting namenodes on []
Password:
localhost: /usr/local/Cellar/hadoop/2.7.2/libexec/sbin/hadoop-daemon.sh: line 69: [: MacBook: integer expression expected
localhost: starting namenode, logging to /usr/local/Cellar/hadoop/2.7.2/libexec/logs/hadoop-name-namenode-name’s
localhost: Error: Could not find or load main class MacBook
The hadoop-daemon.sh is:
The relevant XMLs are as follows:
core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
If anything else is needed, I will gladly provide it. Thank you for all the help; I truly appreciate it, since I really want to start using Hadoop.
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_65.jdk/Contents/Home
export HADOOP_PREFIX=/usr/local/Cellar/hadoop
Hey, so this is an update if anyone is interested: I now get this:
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [myIP#]
New note: I am redoing the process and re-following this guide (link below). Whether or not I succeed, I will post my update here :)!
zhongyaonan.com/hadoop-tutorial/…
Looks like your conf directory is not set properly; try the following steps:
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
hdfs namenode -format
hdfs getconf -namenodes
./start-dfs.sh
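If that works, it is worth making the export permanent, since it only applies to the current shell session; something like the following (file locations are the conventional ones, adjust to your install):
# persist the setting in your shell profile (or in $HADOOP_HOME/etc/hadoop/hadoop-env.sh)
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
# sanity check: this should print your NameNode host rather than an empty list
hdfs getconf -namenodes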

WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable with hadoop-2.6.0

I have started working with Hadoop; I am a beginner. I have successfully installed hadoop-2.6.0 on Ubuntu 15.04 64-bit.
Commands like start-all.sh, start-dfs.sh, etc. are working nicely.
I am facing a problem when I try to move files from the local file system to HDFS,
for example with the copyFromLocal command:
hadoop dfs -copyFromLocal ~/Hadoop/test/text2.txt ~/Hadoop/test_hds/input.txt
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
15/06/04 23:18:29 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
copyFromLocal: Call From royaljay-Inspiron-N4010/127.0.1.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
The same problem occurs with the put command:
hadoop dfs -put ~/test/test/test1.txt hd.txt
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
15/06/03 20:49:18 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
put: Cannot create file/user/hduser/hd.txt.COPYING. Name node is in safe mode.
I have found many suggested solutions, but none of them works.
If anyone has an idea about this, please tell me.
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
You should not use hadoop dfs; instead, use the following command:
hdfs dfs -copyFromLocal ...
Do not use ~; instead, give the full path, like /home/hadoop/Hadoop/test/text2.txt.
Call From royaljay-Inspiron-N4010/127.0.1.1 to localhost:9000 failed
on connection exception: java.net.ConnectException: Connection
refused; For more details see:
http://wiki.apache.org/hadoop/ConnectionRefused
127.0.1.1 will cause loopback problems. Remove the line with 127.0.1.1 from /etc/hosts.
NOTE: For copying files from the local filesystem to HDFS, try using the -put command instead of -copyFromLocal.
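Regarding the "Name node is in safe mode" message from the second command: the NameNode normally leaves safe mode on its own shortly after startup, once enough block reports have arrived, but you can check and force it out if it gets stuck. A sketch (the local path is only an example of a fully qualified path):
hdfs dfsadmin -safemode get      # prints "Safe mode is ON" or "Safe mode is OFF"
hdfs dfsadmin -safemode leave    # force the NameNode out of safe mode if it stays ON
hdfs dfs -put /home/hduser/test/test/test1.txt /user/hduser/hd.txt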

Getting Exception on "hadoop fs -ls /"

I am running hadoop-2.0.5-alpha.
When I list HDFS files, I get this exception:
bin/hadoop fs -ls /
13/07/07 18:47:36 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
ls: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Message missing required fields: callId, status;
My core-site.xml looks like this:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:8020</value>
</property>
</configuration>
What could be wrong?
If you have multiple versions of Hadoop installed on your system, verify your PATH. You may be using the wrong version of hadoop as the client.
I ran into this problem when I had two versions of hadoop installed: hadoop-1.1.2 and hadoop-2.1.0-beta. It turned out that my path was incorrect and I was attempting to run the hadoop command from hadoop-1.1.2 against hadoop 2.1.0-beta.
In addition to your PATH, check the settings of your HADOOP_CONF_DIR or even HADOOP_HOME environment variables to be sure they are pointing to the correct directory for your hadoop 2 installation.
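A quick way to confirm which installation is actually being picked up (plain shell commands, nothing beyond what is already described above):
which hadoop                            # should point into the Hadoop 2 tree, not hadoop-1.1.2
hadoop version                          # confirms the client version that is on the PATH
echo "$HADOOP_HOME" "$HADOOP_CONF_DIR"  # both should reference the Hadoop 2 installation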

Resources