I am trying to run Hadoop as the root user. I executed the namenode format command hadoop namenode -format. After that I tried to start the Hadoop daemons, but the NameNode does not start. I then ran hadoop namenode -importCheckpoint, and it gives the following error:
14/09/15 01:25:55 INFO common.Storage: Storage directory /home/umaima/cloudera_namedir is not formatted.
14/09/15 01:25:55 INFO common.Storage: Formatting ...
14/09/15 01:25:55 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:336)
at org.apache.hadoop.hdfs.server.namenode.FSImage.doImportCheckpoint(FSImage.java:531)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:375)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
14/09/15 01:25:55 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at umaima-Lenovo-IdeaPad-S510p/127.0.1.1
************************************************************/
I am stuck on this. Any help is highly appreciated. Thanks in advance.
Before formatting the NameNode, delete the tmp folder (it contains the DataNode and NameNode storage), then format the NameNode.
And then start the Hadoop services again, as in the sketch below.
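A minimal sketch of those steps, assuming hadoop.tmp.dir resolves to /tmp/hadoop-root (the default when running as root; check core-site.xml for your actual path):

stop-all.sh                 # stop any running daemons first
rm -rf /tmp/hadoop-root/*   # clear the tmp folder (this path is an assumption)
hadoop namenode -format     # reformat the NameNode
start-all.sh                # start the Hadoop services again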
The NameNode for my fully distributed Hadoop setup on Ubuntu will not stay up; it starts and shuts down with the error below. I tried a few things but nothing works. The NameNode log is below. Any help is appreciated.
Directory /usr/hdfs/namenode is in an inconsistent state: storage directory does not exist or is not accessible.
2019-03-25 01:34:44,354 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at
I have already tried reformatting the NameNode. jps shows:
7952 SecondaryNameNode
7714 DataNode
23346 NodeManager
10555 Jps
23167 ResourceManager
Kindly recheck the files ~/.bashrc, core-site.xml, and hdfs-site.xml. In particular, make sure the storage directory from the error exists and is accessible; see the sketch below.
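For example (the hduser:hadoop owner below is an assumption; use whatever user runs your daemons), make sure the directory named in the error exists, is writable, and matches hdfs-site.xml:

sudo mkdir -p /usr/hdfs/namenode
sudo chown -R hduser:hadoop /usr/hdfs/namenode

# hdfs-site.xml should then contain something like:
#   <property>
#     <name>dfs.namenode.name.dir</name>
#     <value>file:///usr/hdfs/namenode</value>
#   </property>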
I tried to set up Hadoop 2.6.1 based on instructions from here. But my DataNode is not up. When I run jps, I get only the processes below:
▶ jps
8406 ResourceManager
7744 NameNode
8527 NodeManager
8074 SecondaryNameNode
9121 Jps
DataNode Log:
2015-10-07 13:02:24,144 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid dfs.datanode.data.dir /home/vinod/.hadoopdata/hdfs/datanode :
EPERM: Operation not permitted
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmodImpl(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmod(NativeIO.java:230)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:652)
at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:490)
at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:140)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:156)
at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2299)
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2341)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2323)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2215)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2262)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2438)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2462)
2015-10-07 13:02:24,147 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
java.io.IOException: All directories in dfs.datanode.data.dir are invalid: "/home/vinod/.hadoopdata/hdfs/datanode/"
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2350)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2323)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2215)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2262)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2438)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2462)
2015-10-07 13:02:24,148 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2015-10-07 13:02:24,150 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at BBDSK0201/127.0.1.1
************************************************************/
Please help; what could I be missing?
1) Make sure the directory has the right owner and permissions.
$ sudo chown -R hduser:hadoop /home/vinod/.hadoopdata/hdfs/datanode
2) Delete the contents of the tmp directory, i.e. the path set by the hadoop.tmp.dir parameter.
3) Format the NameNode.
Start all the processes again (a sketch of these steps follows). Hope this helps.
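A sketch of steps 2 and 3 plus the restart, assuming hadoop.tmp.dir points to /app/hadoop/tmp (an assumption; check core-site.xml for the real value) and the Hadoop 2.x start scripts:

rm -rf /app/hadoop/tmp/*       # step 2: clear hadoop.tmp.dir (path is an assumption)
hdfs namenode -format          # step 3: reformat the NameNode
start-dfs.sh && start-yarn.sh  # restart HDFS and YARN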
I'm trying to upgrade my 5-node Hadoop cluster from 1.0 to 2.2.0. When I try to upgrade the NameNode using the hadoop-daemon.sh start namenode -upgrade command and check the log files, I get the following error message.
2015-03-13 10:02:24,549 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.lang.IllegalArgumentException: Invalid URI for NameNode address (check fs.defaultFS): file:/// has no authority.
at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:347)
at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:335)
at org.apache.hadoop.hdfs.server.namenode.NameNode.getRpcServerAddress(NameNode.java:388)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loginAsNameNodeUser(NameNode.java:471)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:483)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:684)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:669)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1254)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1320)
2015-03-13 10:02:24,586 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2015-03-13 10:02:24,593 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at nn.cluster.com/192.168.1.75
************************************************************/
Please check your core-site.xml; it should contain a valid NameNode address, along the lines of the example below.
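A hedged example (the host is taken from the shutdown message above; port 8020 is the usual NameNode RPC default but is an assumption here):

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://nn.cluster.com:8020</value>
</property>

The bare file:/// default, which has no authority, produces exactly the IllegalArgumentException shown.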
I am trying to configure a pseudo-distributed single-node Hadoop cluster on my Ubuntu machine. However, when I run the following command
bin/start-all.sh
it starts all the daemons, but when I run jps it does not show the NameNode, and when I go to the NameNode logs, the following message is displayed.
2013-04-26 11:59:09,927 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: Incomplete HDFS URI, no host: hdfs://vikasXXX.XX.XX.XX:X000
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:85)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:123)
at org.apache.hadoop.fs.Trash.<init>(Trash.java:62)
at org.apache.hadoop.hdfs.server.namenode.NameNode.startTrashEmptier(NameNode.java:314)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:310)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
2013-04-26 11:59:09,929 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at vikas/XXX.XX.XX.XX
************************************************************/
What could be the reason?
Thanks
I assume that you have masked the URL you shared:
hdfs://vikasXXX.XX.XX.XX:X000
I think it is not recognizing your machine by name. Try using localhost and check if it works.
hdfs://localhost:8020
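If that works, the setting lives in core-site.xml; for a Hadoop 1.x setup the property is fs.default.name (port 8020 below is an assumption, keep whatever port your setup uses):

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:8020</value>
</property>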
I encountered the error mentioned below after installing Hadoop and executing the hadoop namenode -format command.
Based on the displayed logs, I figured out that I need to update the "host" in the configuration, but I am unable to find the exact location of the configuration file (.xml) that needs to be updated.
I am installing on Fedora on a single node. I am looking for your help in addressing this issue; please point me to any specific link or documentation that could be helpful while debugging.
[hadoop@hadoop ~]$ hadoop namenode -format
Warning: $HADOOP_HOME is deprecated.
13/02/03 11:33:09 INFO namenode.NameNode: STARTUP_MSG:
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = java.net.UnknownHostException: hadoop: hadoop
13/02/03 11:34:27 INFO namenode.NameNode: SHUTDOWN_MSG:
SHUTDOWN_MSG: Shutting down NameNode at java.net.UnknownHostException: hadoop: hadoop
Configuration files are in $HADOOP_HOME/etc/hadoop (or $HADOOP_HOME/conf on older 1.x releases), where $HADOOP_HOME is probably /usr/local/hadoop. Several files there may reference your hostname; you should check them.
Also, you should check /etc/hosts; a hedged example follows.
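Something like this in /etc/hosts, assuming the machine's hostname is hadoop (as the UnknownHostException: hadoop: hadoop suggests) and a single-node install:

127.0.0.1   localhost
127.0.0.1   hadoop

After editing, hostname and hostname -f should both resolve without an error, and the format command should get past the UnknownHostException.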