I have a fairly small setup (HDP 2.6) with roughly 1429 blocks on a 15 TB HDD. The system has 512 GB RAM and 128 cores (256 threads).
Over the last few days, I've seen the startup of the entire HDP setup go from about 10 minutes to several hours; the culprit turned out to be the NameNode. When the box was first set up without any data, the entire HDP + HCP stack would start in about 10 minutes (including the DataNodes and NameNode). We then started testing with large volumes of data, and over time our block count went over 23 million. At that point the system took around 3 hours to start, mostly due to NameNode startup time, which seems understandable given the large number of blocks.
However, even after deleting all the folders/blocks and leaving behind just 1429 blocks, the system still takes over 50 minutes to start the NameNode and come out of Safe Mode automatically.
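For context, while it sits in Safe Mode I keep checking its status and the block counts with the standard admin CLI, roughly like this (output trimmed):
$ hdfs dfsadmin -safemode get          # stays "Safe mode is ON" until the block threshold is met
$ hdfs dfsadmin -report | head -n 20   # live DataNodes and reported block/capacity summary
$ hdfs fsck /                          # total blocks as the NameNode sees them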
The DataNode logs pause after the Replica Cache line below and then start displaying "Detected pause in JVM or host machine (eg GC)".
************************************************************/
2019-10-29 00:30:01,711 INFO datanode.DataNode (LogAdapter.java:info(47)) - STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: user = hdfs
STARTUP_MSG: host = xxxx.corp.com/scrambled.private.ip.address
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.7.3.2.6.5.1100-53
STARTUP_MSG: classpath = removed for brevity
STARTUP_MSG: build = git#github.com:hortonworks/hadoop.git -r 3091053c59a62c82d82c9f778c48bde5ef0a89a1; compiled by 'jenkins' on 2019-03-13T15:40Z
STARTUP_MSG: java = 1.8.0_112
************************************************************/
2019-10-29 00:30:02,253 INFO checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(122)) - Scheduling a check for [DISK]file:/hadoop/hdfs/data/
2019-10-29 00:30:04,189 INFO datanode.BlockScanner (BlockScanner.java:<init>(180)) - Initialized block scanner with targetBytesPerSec 1048576
2019-10-29 00:30:04,193 INFO common.Util (Util.java:isDiskStatsEnabled(111)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2019-10-29 00:30:04,197 INFO datanode.DataNode (DataNode.java:<init>(444)) - File descriptor passing is enabled.
2019-10-29 00:30:04,197 INFO datanode.DataNode (DataNode.java:<init>(455)) - Configured hostname is xxxx.corp.com
2019-10-29 00:30:04,197 INFO common.Util (Util.java:isDiskStatsEnabled(111)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2019-10-29 00:30:04,198 WARN conf.Configuration (Configuration.java:getTimeDurationHelper(1659)) - No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2019-10-29 00:30:04,204 INFO datanode.DataNode (DataNode.java:startDataNode(1251)) - Starting DataNode with maxLockedMemory = 0
2019-10-29 00:30:04,221 INFO datanode.DataNode (DataNode.java:initDataXceiver(1028)) - Opened streaming server at /0.0.0.0:50010
2019-10-29 00:30:04,223 INFO datanode.DataNode (DataXceiverServer.java:<init>(78)) - Balancing bandwith is 6250000 bytes/s
2019-10-29 00:30:04,223 INFO datanode.DataNode (DataXceiverServer.java:<init>(79)) - Number threads for balancing is 5
2019-10-29 00:30:04,225 INFO datanode.DataNode (DataXceiverServer.java:<init>(78)) - Balancing bandwith is 6250000 bytes/s
2019-10-29 00:30:04,225 INFO datanode.DataNode (DataXceiverServer.java:<init>(79)) - Number threads for balancing is 5
2019-10-29 00:30:04,226 INFO datanode.DataNode (DataNode.java:initDataXceiver(1043)) - Listening on UNIX domain socket: /var/lib/hadoop-hdfs/dn_socket
2019-10-29 00:30:04,296 INFO mortbay.log (Slf4jLog.java:info(67)) - Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2019-10-29 00:30:04,304 INFO server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(296)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2019-10-29 00:30:04,308 INFO http.HttpRequestLog (HttpRequestLog.java:getRequestLog(80)) - Http request log for http.requests.datanode is not defined
2019-10-29 00:30:04,313 INFO http.HttpServer2 (HttpServer2.java:addGlobalFilter(788)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2019-10-29 00:30:04,315 INFO http.HttpServer2 (HttpServer2.java:addFilter(763)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2019-10-29 00:30:04,337 INFO http.HttpServer2 (HttpServer2.java:bindListener(986)) - Jetty bound to port 43272
2019-10-29 00:30:04,338 INFO mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26.hwx
2019-10-29 00:30:04,511 INFO mortbay.log (Slf4jLog.java:info(67)) - Started HttpServer2$SelectChannelConnectorWithSafeStartup#localhost:43272
2019-10-29 00:30:07,643 INFO web.DatanodeHttpServer (DatanodeHttpServer.java:start(233)) - Listening HTTP traffic on /0.0.0.0:50075
2019-10-29 00:30:07,647 INFO util.JvmPauseMonitor (JvmPauseMonitor.java:run(179)) - Starting JVM pause monitor
2019-10-29 00:30:08,366 INFO datanode.DataNode (DataNode.java:startDataNode(1277)) - dnUserName = hdfs
2019-10-29 00:30:08,366 INFO datanode.DataNode (DataNode.java:startDataNode(1278)) - supergroup = hdfs
2019-10-29 00:30:08,579 INFO ipc.CallQueueManager (CallQueueManager.java:<init>(75)) - Using callQueue: class java.util.concurrent.LinkedBlockingQueue scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
2019-10-29 00:30:08,734 INFO ipc.Server (Server.java:run(821)) - Starting Socket Reader #1 for port 8010
2019-10-29 00:30:09,244 INFO datanode.DataNode (DataNode.java:initIpcServer(941)) - Opened IPC server at /0.0.0.0:8010
2019-10-29 00:30:09,258 INFO datanode.DataNode (BlockPoolManager.java:refreshNamenodes(152)) - Refresh request received for nameservices: null
2019-10-29 00:30:09,274 INFO datanode.DataNode (BlockPoolManager.java:doRefreshNamenodes(201)) - Starting BPOfferServices for nameservices: <default>
2019-10-29 00:30:09,430 INFO datanode.DataNode (BPServiceActor.java:run(761)) - Block pool <registering> (Datanode Uuid unassigned) service to xxxx.corp.com/scrambled.private.ip.address:8020 starting to offer service
2019-10-29 00:30:09,434 INFO ipc.Server (Server.java:run(1064)) - IPC Server Responder: starting
2019-10-29 00:30:09,434 INFO ipc.Server (Server.java:run(900)) - IPC Server listener on 8010: starting
2019-10-29 00:30:10,930 INFO common.Storage (DataStorage.java:getParallelVolumeLoadThreadsNum(384)) - Using 1 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=1, dataDirs=1)
2019-10-29 00:30:10,962 INFO common.Storage (Storage.java:tryLock(776)) - Lock on /hadoop/hdfs/data/in_use.lock acquired by nodename 210295#xxxx.corp.com
2019-10-29 00:30:11,121 INFO common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(250)) - Analyzing storage directories for bpid BP-814497463-127.0.0.1-1558792659773
2019-10-29 00:30:11,121 INFO common.Storage (Storage.java:lock(735)) - Locking is disabled for /hadoop/hdfs/data/current/BP-814497463-127.0.0.1-1558792659773
2019-10-29 00:30:11,139 INFO datanode.DataNode (DataNode.java:initStorage(1546)) - Setting up storage: nsid=875919329;bpid=BP-814497463-127.0.0.1-1558792659773;lv=-56;nsInfo=lv=-63;cid=CID-49b9105e-fc0d-4ea4-9d2f-caceb95ce4bb;nsid=875919329;c=0;bpid=BP-814497463-127.0.0.1-1558792659773;dnuuid=0aff4a22-3f1a-485b-9aec-46fd881dfab0
2019-10-29 00:30:11,523 INFO impl.FsDatasetImpl (FsVolumeList.java:addVolume(295)) - Added new volume: DS-ea7ed3be-90ad-4424-a00c-577601814d81
2019-10-29 00:30:11,523 INFO impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(426)) - Added volume - /hadoop/hdfs/data/current, StorageType: DISK
2019-10-29 00:30:11,527 INFO impl.FsDatasetImpl (FsDatasetImpl.java:registerMBean(2203)) - Registered FSDatasetState MBean
2019-10-29 00:30:11,711 INFO checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(122)) - Scheduling a check for /hadoop/hdfs/data/current
2019-10-29 00:30:11,719 INFO checker.DatasetVolumeChecker (DatasetVolumeChecker.java:checkAllVolumes(210)) - Scheduled health check for volume /hadoop/hdfs/data/current
2019-10-29 00:30:11,721 INFO impl.FsDatasetImpl (FsDatasetImpl.java:addBlockPool(2686)) - Adding block pool BP-814497463-127.0.0.1-1558792659773
2019-10-29 00:30:11,722 INFO impl.FsDatasetImpl (FsVolumeList.java:run(392)) - Scanning block pool BP-814497463-127.0.0.1-1558792659773 on volume /hadoop/hdfs/data/current...
2019-10-29 00:30:11,898 INFO impl.FsDatasetImpl (BlockPoolSlice.java:loadDfsUsed(251)) - Cached dfsUsed found for /hadoop/hdfs/data/current/BP-814497463-127.0.0.1-1558792659773/current: 414855315456
2019-10-29 00:30:11,901 INFO impl.FsDatasetImpl (FsVolumeList.java:run(397)) - Time taken to scan block pool BP-814497463-127.0.0.1-1558792659773 on /hadoop/hdfs/data/current: 178ms
2019-10-29 00:30:11,901 INFO impl.FsDatasetImpl (FsVolumeList.java:addBlockPool(423)) - Total time to scan all replicas for block pool BP-814497463-127.0.0.1-1558792659773: 180ms
2019-10-29 00:30:11,906 INFO impl.FsDatasetImpl (FsVolumeList.java:run(188)) - Adding replicas to map for block pool BP-814497463-127.0.0.1-1558792659773 on volume /hadoop/hdfs/data/current...
2019-10-29 00:30:11,906 INFO impl.BlockPoolSlice (BlockPoolSlice.java:readReplicasFromCache(738)) - Replica Cache file: /hadoop/hdfs/data/current/BP-814497463-127.0.0.1-1558792659773/current/replicas doesn't exist
2019-10-29 00:31:24,607 INFO timeline.HadoopTimelineMetricsSink
The corresponding NameNode log shows the following and keeps repeating "The reported blocks 0 needs additional 1429 blocks to reach the threshold 1.0000 of total blocks 1428."
***********************************************/
2019-10-29 00:30:20,165 INFO namenode.NameNode (LogAdapter.java:info(47)) - STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: user = hdfs
STARTUP_MSG: host = xxxx.corp.com/scrambled.private.ip.address
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.7.3.2.6.5.1100-53
STARTUP_MSG: classpath = removed for brevity
STARTUP_MSG: build = git#github.com:hortonworks/hadoop.git -r 3091053c59a62c82d82c9f778c48bde5ef0a89a1; compiled by 'jenkins' on 2019-03-13T15:40Z
STARTUP_MSG: java = 1.8.0_112
***************/
2019-10-29 00:30:20,176 INFO namenode.NameNode (NameNode.java:createNameNode(1624)) - createNameNode []
2019-10-29 00:30:20,747 INFO namenode.NameNode (NameNode.java:setClientNamenodeAddress(450)) - fs.defaultFS is hdfs://xxxx.corp.com:8020
2019-10-29 00:30:20,748 INFO namenode.NameNode (NameNode.java:setClientNamenodeAddress(470)) - Clients are to use xxxx.corp.com:8020 to access this namenode/service.
2019-10-29 00:30:20,866 INFO util.JvmPauseMonitor (JvmPauseMonitor.java:run(179)) - Starting JVM pause monitor
2019-10-29 00:30:20,874 INFO hdfs.DFSUtil (DFSUtil.java:httpServerTemplateForNNAndJN(1803)) - Starting Web-server for hdfs at: http://xxxx.corp.com:50070
2019-10-29 00:30:20,923 INFO server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(296)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2019-10-29 00:30:20,927 INFO http.HttpRequestLog (HttpRequestLog.java:getRequestLog(80)) - Http request log for http.requests.namenode is not defined
2019-10-29 00:30:20,931 INFO http.HttpServer2 (HttpServer2.java:addGlobalFilter(788)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2019-10-29 00:30:20,933 INFO http.HttpServer2 (HttpServer2.java:addFilter(763)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2019-10-29 00:30:20,933 INFO http.HttpServer2 (HttpServer2.java:addFilter(771)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2019-10-29 00:30:20,933 INFO http.HttpServer2 (HttpServer2.java:addFilter(771)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2019-10-29 00:30:20,934 INFO security.HttpCrossOriginFilterInitializer (HttpCrossOriginFilterInitializer.java:initFilter(49)) - CORS filter not enabled. Please set hadoop.http.cross-origin.enabled to 'true' to enable it
2019-10-29 00:30:20,953 INFO http.HttpServer2 (NameNodeHttpServer.java:initWebHdfs(93)) - Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2019-10-29 00:30:20,954 INFO http.HttpServer2 (HttpServer2.java:addJerseyResourcePackage(687)) - addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2019-10-29 00:30:20,961 INFO http.HttpServer2 (HttpServer2.java:bindListener(986)) - Jetty bound to port 50070
2019-10-29 00:30:20,962 INFO mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26.hwx
2019-10-29 00:30:20,986 WARN mortbay.log (Slf4jLog.java:warn(76)) - Can't reuse /tmp/Jetty_xxxx_corp_com_50070_hdfs____ggu70m, using /tmp/Jetty_xxxx_corp_com_50070_hdfs____ggu70m_2845921744604868870
2019-10-29 00:30:21,121 INFO mortbay.log (Slf4jLog.java:info(67)) - Started HttpServer2$SelectChannelConnectorWithSafeStartup#xxxx.corp.com:50070
2019-10-29 00:30:21,143 WARN common.Util (Util.java:stringAsURI(57)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2019-10-29 00:30:21,143 WARN common.Util (Util.java:stringAsURI(57)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2019-10-29 00:30:21,143 WARN namenode.FSNamesystem (FSNamesystem.java:checkConfiguration(690)) - Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2019-10-29 00:30:21,143 WARN namenode.FSNamesystem (FSNamesystem.java:checkConfiguration(695)) - Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2019-10-29 00:30:21,148 WARN common.Util (Util.java:stringAsURI(57)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2019-10-29 00:30:21,148 WARN common.Util (Util.java:stringAsURI(57)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2019-10-29 00:30:21,153 WARN common.Storage (NNStorage.java:setRestoreFailedStorage(208)) - set restore failed storage to true
2019-10-29 00:30:21,172 INFO namenode.FSEditLog (FSEditLog.java:newInstance(225)) - Edit logging is async:false
2019-10-29 00:30:21,176 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(759)) - No KeyProvider found.
2019-10-29 00:30:21,176 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(765)) - Enabling async auditlog
2019-10-29 00:30:21,178 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(769)) - fsLock is fair:false
2019-10-29 00:30:21,204 INFO blockmanagement.HeartbeatManager (HeartbeatManager.java:<init>(91)) - Setting heartbeat recheck interval to 30000 since dfs.namenode.stale.datanode.interval is less than dfs.namenode.heartbeat.recheck-interval
2019-10-29 00:30:21,207 INFO common.Util (Util.java:isDiskStatsEnabled(111)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2019-10-29 00:30:21,214 INFO blockmanagement.DatanodeManager (DatanodeManager.java:<init>(274)) - dfs.block.invalidate.limit=1000
2019-10-29 00:30:21,214 INFO blockmanagement.DatanodeManager (DatanodeManager.java:<init>(280)) - dfs.namenode.datanode.registration.ip-hostname-check=true
2019-10-29 00:30:21,215 INFO blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(71)) - dfs.namenode.startup.delay.block.deletion.sec is set to 000:01:00:00.000
2019-10-29 00:30:21,215 INFO blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(76)) - The block deletion will start around 2019 Oct 29 01:30:21
2019-10-29 00:30:21,217 INFO util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map BlocksMap
2019-10-29 00:30:21,217 INFO util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit
2019-10-29 00:30:21,218 INFO util.GSet (LightWeightGSet.java:computeCapacity(397)) - 2.0% max memory 1011.3 MB = 20.2 MB
2019-10-29 00:30:21,218 INFO util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity = 2^21 = 2097152 entries
2019-10-29 00:30:21,231 INFO blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(409)) - dfs.block.access.token.enable=true
2019-10-29 00:30:21,231 INFO blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(430)) - dfs.block.access.key.update.interval=600 min(s), dfs.block.access.token.lifetime=600 min(s), dfs.encrypt.data.transfer.algorithm=null
2019-10-29 00:30:21,354 INFO blockmanagement.BlockManager (BlockManager.java:<init>(395)) - defaultReplication = 1
2019-10-29 00:30:21,354 INFO blockmanagement.BlockManager (BlockManager.java:<init>(396)) - maxReplication = 50
2019-10-29 00:30:21,354 INFO blockmanagement.BlockManager (BlockManager.java:<init>(397)) - minReplication = 1
2019-10-29 00:30:21,354 INFO blockmanagement.BlockManager (BlockManager.java:<init>(398)) - maxReplicationStreams = 2
2019-10-29 00:30:21,354 INFO blockmanagement.BlockManager (BlockManager.java:<init>(399)) - replicationRecheckInterval = 3000
2019-10-29 00:30:21,354 INFO blockmanagement.BlockManager (BlockManager.java:<init>(400)) - encryptDataTransfer = false
2019-10-29 00:30:21,354 INFO blockmanagement.BlockManager (BlockManager.java:<init>(401)) - maxNumBlocksToLog = 1000
2019-10-29 00:30:21,360 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(790)) - fsOwner = hdfs (auth:SIMPLE)
2019-10-29 00:30:21,360 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(791)) - supergroup = hdfs
2019-10-29 00:30:21,360 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(792)) - isPermissionEnabled = true
2019-10-29 00:30:21,360 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(803)) - HA Enabled: false
2019-10-29 00:30:21,361 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(843)) - Append Enabled: true
2019-10-29 00:30:21,388 INFO util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map INodeMap
2019-10-29 00:30:21,388 INFO util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit
2019-10-29 00:30:21,388 INFO util.GSet (LightWeightGSet.java:computeCapacity(397)) - 1.0% max memory 1011.3 MB = 10.1 MB
2019-10-29 00:30:21,389 INFO util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity = 2^20 = 1048576 entries
2019-10-29 00:30:21,393 INFO namenode.FSDirectory (FSDirectory.java:<init>(256)) - ACLs enabled? false
2019-10-29 00:30:21,393 INFO namenode.FSDirectory (FSDirectory.java:<init>(260)) - XAttrs enabled? true
2019-10-29 00:30:21,393 INFO namenode.FSDirectory (FSDirectory.java:<init>(268)) - Maximum size of an xattr: 16384
2019-10-29 00:30:21,393 INFO namenode.NameNode (FSDirectory.java:<init>(321)) - Caching file names occuring more than 10 times
2019-10-29 00:30:21,399 INFO util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map cachedBlocks
2019-10-29 00:30:21,399 INFO util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit
2019-10-29 00:30:21,400 INFO util.GSet (LightWeightGSet.java:computeCapacity(397)) - 0.25% max memory 1011.3 MB = 2.5 MB
2019-10-29 00:30:21,400 INFO util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity = 2^18 = 262144 entries
2019-10-29 00:30:21,402 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(5582)) - dfs.namenode.safemode.threshold-pct = 1.0
2019-10-29 00:30:21,402 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(5583)) - dfs.namenode.safemode.min.datanodes = 0
2019-10-29 00:30:21,402 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(5584)) - dfs.namenode.safemode.extension = 30000
2019-10-29 00:30:21,405 INFO metrics.TopMetrics (TopMetrics.java:logConf(76)) - NNTop conf: dfs.namenode.top.window.num.buckets = 10
2019-10-29 00:30:21,405 INFO metrics.TopMetrics (TopMetrics.java:logConf(78)) - NNTop conf: dfs.namenode.top.num.users = 10
2019-10-29 00:30:21,405 INFO metrics.TopMetrics (TopMetrics.java:logConf(80)) - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2019-10-29 00:30:21,408 INFO namenode.FSNamesystem (FSNamesystem.java:initRetryCache(971)) - Retry cache on namenode is enabled
2019-10-29 00:30:21,408 INFO namenode.FSNamesystem (FSNamesystem.java:initRetryCache(979)) - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2019-10-29 00:30:21,410 INFO util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map NameNodeRetryCache
2019-10-29 00:30:21,411 INFO util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit
2019-10-29 00:30:21,411 INFO util.GSet (LightWeightGSet.java:computeCapacity(397)) - 0.029999999329447746% max memory 1011.3 MB = 310.7 KB
2019-10-29 00:30:21,411 INFO util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity = 2^15 = 32768 entries
2019-10-29 00:30:21,456 INFO common.Storage (Storage.java:tryLock(776)) - Lock on /home/hadoop/hdfs/namenode/in_use.lock acquired by nodename 211070#xxxx.corp.com
2019-10-29 00:30:21,503 INFO namenode.FileJournalManager (FileJournalManager.java:recoverUnfinalizedSegments(388)) - Recovering unfinalized segments in /home/hadoop/hdfs/namenode/current
2019-10-29 00:30:21,527 INFO namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file /home/hadoop/hdfs/namenode/current/edits_inprogress_0000000000199282266 -> /home/hadoop/hdfs/namenode/current/edits_0000000000199282266-0000000000199282266
2019-10-29 00:30:21,532 INFO namenode.FSImage (FSImage.java:loadFSImageFile(745)) - Planning to load image: FSImageFile(file=/home/hadoop/hdfs/namenode/current/fsimage_0000000000199282232, cpktTxId=0000000000199282232)
2019-10-29 00:30:21,562 INFO namenode.FSImageFormatPBINode (FSImageFormatPBINode.java:loadINodeSection(257)) - Loading 1993 INodes.
2019-10-29 00:30:21,724 INFO namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:load(184)) - Loaded FSImage in 0 seconds.
2019-10-29 00:30:21,725 INFO namenode.FSImage (FSImage.java:loadFSImage(911)) - Loaded image for txid 199282232 from /home/hadoop/hdfs/namenode/current/fsimage_0000000000199282232
2019-10-29 00:30:21,725 INFO namenode.FSImage (FSImage.java:loadEdits(849)) - Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream#63fd4873 expecting start txid #199282233
2019-10-29 00:30:21,726 INFO namenode.FSImage (FSEditLogLoader.java:loadFSEdits(142)) - Start loading edits file /home/hadoop/hdfs/namenode/current/edits_0000000000199282233-0000000000199282265
2019-10-29 00:30:21,729 INFO namenode.RedundantEditLogInputStream (RedundantEditLogInputStream.java:nextOp(177)) - Fast-forwarding stream '/home/hadoop/hdfs/namenode/current/edits_0000000000199282233-0000000000199282265' to transaction ID 199282233
2019-10-29 00:30:21,752 INFO namenode.FSImage (FSEditLogLoader.java:loadFSEdits(145)) - Edits file /home/hadoop/hdfs/namenode/current/edits_0000000000199282233-0000000000199282265 of size 1048576 edits # 33 loaded in 0 seconds
2019-10-29 00:30:21,752 INFO namenode.FSImage (FSImage.java:loadEdits(849)) - Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream#1e11bc55 expecting start txid #199282266
2019-10-29 00:30:21,752 INFO namenode.FSImage (FSEditLogLoader.java:loadFSEdits(142)) - Start loading edits file /home/hadoop/hdfs/namenode/current/edits_0000000000199282266-0000000000199282266
2019-10-29 00:30:21,752 INFO namenode.RedundantEditLogInputStream (RedundantEditLogInputStream.java:nextOp(177)) - Fast-forwarding stream '/home/hadoop/hdfs/namenode/current/edits_0000000000199282266-0000000000199282266' to transaction ID 199282233
2019-10-29 00:30:21,753 INFO namenode.FSImage (FSEditLogLoader.java:loadFSEdits(145)) - Edits file /home/hadoop/hdfs/namenode/current/edits_0000000000199282266-0000000000199282266 of size 1048576 edits # 1 loaded in 0 seconds
2019-10-29 00:30:21,753 INFO namenode.FSNamesystem (FSNamesystem.java:loadFSImage(1083)) - Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
2019-10-29 00:30:21,754 INFO namenode.FSEditLog (FSEditLog.java:startLogSegment(1294)) - Starting log segment at 199282267
2019-10-29 00:30:21,880 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 8 entries 214 lookups
2019-10-29 00:30:21,881 INFO namenode.FSNamesystem (FSNamesystem.java:loadFromDisk(731)) - Finished loading FSImage in 465 msecs
2019-10-29 00:30:22,002 INFO namenode.NameNode (NameNodeRpcServer.java:<init>(428)) - RPC server is binding to xxxx.corp.com:8020
2019-10-29 00:30:22,007 INFO ipc.CallQueueManager (CallQueueManager.java:<init>(75)) - Using callQueue: class java.util.concurrent.LinkedBlockingQueue scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
2019-10-29 00:30:22,015 INFO ipc.Server (Server.java:run(821)) - Starting Socket Reader #1 for port 8020
2019-10-29 00:30:22,049 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(6517)) - Registered FSNamesystemState MBean
2019-10-29 00:30:22,050 WARN common.Util (Util.java:stringAsURI(57)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2019-10-29 00:30:22,064 INFO namenode.LeaseManager (LeaseManager.java:getNumUnderConstructionBlocks(139)) - Number of blocks under construction: 0
2019-10-29 00:30:22,065 INFO hdfs.StateChange (FSNamesystem.java:reportStatus(5952)) - STATE* Safe mode ON.
The reported blocks 0 needs additional 1429 blocks to reach the threshold 1.0000 of total blocks 1428.
The number of live datanodes 0 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.
2019-10-29 00:30:22,075 INFO blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(401)) - Number of failed storage changes from 0 to 0
2019-10-29 00:30:22,077 INFO block.BlockTokenSecretManager (BlockTokenSecretManager.java:updateKeys(222)) - Updating block keys
2019-10-29 00:30:22,095 INFO ipc.Server (Server.java:run(1064)) - IPC Server Responder: starting
2019-10-29 00:30:22,095 INFO ipc.Server (Server.java:run(900)) - IPC Server listener on 8020: starting
2019-10-29 00:30:22,115 INFO namenode.NameNode (NameNode.java:startCommonServices(885)) - NameNode RPC up at: xxxx.corp.com/scrambled.private.ip.address:8020
2019-10-29 00:30:22,116 INFO namenode.FSNamesystem (FSNamesystem.java:startActiveServices(1191)) - Starting services required for active state
2019-10-29 00:30:22,116 INFO namenode.FSDirectory (FSDirectory.java:updateCountForQuota(708)) - Initializing quota with 4 thread(s)
2019-10-29 00:30:22,127 INFO namenode.FSDirectory (FSDirectory.java:updateCountForQuota(717)) - Quota initialization completed in 11 milliseconds
name space=1995
storage space=6473571992
storage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0
2019-10-29 00:30:22,131 INFO blockmanagement.CacheReplicationMonitor (CacheReplicationMonitor.java:run(161)) - Starting CacheReplicationMonitor with interval 30000 milliseconds
2019-10-29 00:30:22,525 INFO fs.TrashPolicyDefault (TrashPolicyDefault.java:<init>(228)) - The configured checkpoint interval is 0 minutes. Using an interval of 60 minutes that is used for deletion instead
2019-10-29 00:31:52,817 INFO ipc.Server (Server.java:logException(2428)) - IPC Server handler 29 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.mkdirs from scrambled.private.ip.address:55080 Call#143 Retry#0: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create directory /tmp/hive/anonymous/393adcb5-cd66-43a7-ab38-e759f5daf88e. Name node is in safe mode.
The reported blocks 0 needs additional 1429 blocks to reach the threshold 1.0000 of total blocks 1428.
The number of live datanodes 0 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.
What exactly is going on, and how do I go about troubleshooting this? I tried increasing the heap size for both the NameNode and the DataNode as well. The GC messages from the DataNode disappear, but I still see them in the NameNode logs while it is reading the INodes.
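For reference, I raised the heaps through the usual hadoop-env settings (in HDP this goes via Ambari's hadoop-env template); a sketch of what I set, with the sizes being just what I tried rather than a recommendation:
# hadoop-env.sh (sketch; in HDP these lines live in Ambari's hadoop-env template)
export HADOOP_NAMENODE_OPTS="-Xms8g -Xmx8g ${HADOOP_NAMENODE_OPTS}"
export HADOOP_DATANODE_OPTS="-Xms4g -Xmx4g ${HADOOP_DATANODE_OPTS}"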
Any help will be greatly appreciated.
Related
I am new to Hadoop, so I would really appreciate any feedback on this issue.
The Hadoop setup seems fine, and I am able to start it, but when I check the web UI at http://localhost:50070 or http://localhost:9870, it says the site can't be reached. Similarly, when I check the YARN web UI at http://localhost:8088, I have the same problem.
The jps command shows the following:
50714 SecondaryNameNode
88442
51756 Jps
50589 DataNode
NameNode, ResourceManager, and NodeManager are missing.
I have tried changing the port configuration, but it didn't help.
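For reference, this is roughly how I check which addresses the daemons should bind to and whether anything is actually listening (standard getconf keys; defaults differ between Hadoop 2.x and 3.x):
$ hdfs getconf -confKey fs.defaultFS
$ hdfs getconf -confKey dfs.namenode.http-address             # web UI address: port 9870 on 3.x, 50070 on 2.x
$ lsof -iTCP -sTCP:LISTEN -P -n | grep -E '9870|50070|8088'   # is anything listening on those ports?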
Reference: http://localhost:50070 does not work HADOOP
hadoop web UI at http://localhost:50070/ doesnt work
$ ./start-dfs.sh
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [Maggies-MacBook-Pro.local]
2019-09-01 17:33:33,523 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
sbin ./start-yarn.sh
Starting resourcemanager
Starting nodemanagers
After reformatting the NameNode and running start-all.sh:
sbin ./start-all.sh
WARNING: Attempting to start all Apache Hadoop daemons as zxiao in 10 seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [Maggies-MacBook-Pro.local]
2019-09-02 09:19:31,657 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting resourcemanager
Starting nodemanagers
sbin jps
98359 SecondaryNameNode
99014 Jps
98232 DataNode
88442
I still cannot get the NameNode started, and the web UI still won't show up.
Update: here is the log file for the NameNode:
2019-09-02 10:57:12,784 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2019-09-02 10:57:12,850 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2019-09-02 10:57:12,965 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2019-09-02 10:57:13,089 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2019-09-02 10:57:13,090 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2019-09-02 10:57:13,112 INFO org.apache.hadoop.hdfs.server.namenode.NameNodeUtils: fs.defaultFS is hdfs://localhost:8020
2019-09-02 10:57:13,112 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients should use localhost:8020 to access this namenode/service.
2019-09-02 10:57:13,134 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2019-09-02 10:57:13,209 INFO org.apache.hadoop.util.JvmPauseMonitor: Starting JVM pause monitor
2019-09-02 10:57:13,226 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:9870
2019-09-02 10:57:13,235 INFO org.eclipse.jetty.util.log: Logging initialized #839ms
2019-09-02 10:57:13,294 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2019-09-02 10:57:13,302 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2019-09-02 10:57:13,306 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2019-09-02 10:57:13,307 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2019-09-02 10:57:13,307 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2019-09-02 10:57:13,307 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2019-09-02 10:57:13,320 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2019-09-02 10:57:13,320 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2019-09-02 10:57:13,333 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 9870
2019-09-02 10:57:13,333 INFO org.eclipse.jetty.server.Server: jetty-9.3.24.v20180605, build timestamp: 2018-06-05T10:11:56-07:00, git hash: 84205aa28f11a4f31f2a3b86d1bba2cc8ab69827
2019-09-02 10:57:13,350 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.s.ServletContextHandler#2f2bf0e2{/logs,file:///usr/local/Cellar/hadoop/3.1.2/libexec/logs/,AVAILABLE}
2019-09-02 10:57:13,351 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.s.ServletContextHandler#21ec5d87{/static,file:///usr/local/Cellar/hadoop/3.1.2/libexec/share/hadoop/hdfs/webapps/static/,AVAILABLE}
2019-09-02 10:57:13,404 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.w.WebAppContext#4fdf8f12{/,file:///usr/local/Cellar/hadoop/3.1.2/libexec/share/hadoop/hdfs/webapps/hdfs/,AVAILABLE}{/hdfs}
2019-09-02 10:57:13,409 INFO org.eclipse.jetty.server.AbstractConnector: Started ServerConnector#5710768a{HTTP/1.1,[http/1.1]}{0.0.0.0:9870}
2019-09-02 10:57:13,409 INFO org.eclipse.jetty.server.Server: Started #1013ms
2019-09-02 10:57:13,532 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2019-09-02 10:57:13,532 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2019-09-02 10:57:13,559 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Edit logging is async:true
2019-09-02 10:57:13,567 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: KeyProvider: null
2019-09-02 10:57:13,568 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair: true
2019-09-02 10:57:13,569 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2019-09-02 10:57:13,592 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = zxiao (auth:SIMPLE)
2019-09-02 10:57:13,592 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup = supergroup
2019-09-02 10:57:13,592 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2019-09-02 10:57:13,593 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2019-09-02 10:57:13,622 INFO org.apache.hadoop.hdfs.server.common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2019-09-02 10:57:13,630 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2019-09-02 10:57:13,630 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2019-09-02 10:57:13,634 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2019-09-02 10:57:13,634 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2019 Sep 02 10:57:13
2019-09-02 10:57:13,635 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2019-09-02 10:57:13,635 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2019-09-02 10:57:13,636 INFO org.apache.hadoop.util.GSet: 2.0% max memory 4 GB = 81.9 MB
2019-09-02 10:57:13,636 INFO org.apache.hadoop.util.GSet: capacity = 2^23 = 8388608 entries
2019-09-02 10:57:13,657 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable = false
2019-09-02 10:57:13,662 INFO org.apache.hadoop.conf.Configuration.deprecation: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS
2019-09-02 10:57:13,662 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2019-09-02 10:57:13,662 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
2019-09-02 10:57:13,662 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
2019-09-02 10:57:13,662 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 1
2019-09-02 10:57:13,662 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
2019-09-02 10:57:13,662 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
2019-09-02 10:57:13,662 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 2
2019-09-02 10:57:13,662 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: redundancyRecheckInterval = 3000ms
2019-09-02 10:57:13,662 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
2019-09-02 10:57:13,662 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2019-09-02 10:57:13,678 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: GLOBAL serial map: bits=24 maxEntries=16777215
2019-09-02 10:57:13,688 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2019-09-02 10:57:13,688 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2019-09-02 10:57:13,689 INFO org.apache.hadoop.util.GSet: 1.0% max memory 4 GB = 41.0 MB
2019-09-02 10:57:13,689 INFO org.apache.hadoop.util.GSet: capacity = 2^22 = 4194304 entries
2019-09-02 10:57:13,697 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: ACLs enabled? false
2019-09-02 10:57:13,697 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: POSIX ACL inheritance enabled? true
2019-09-02 10:57:13,697 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: XAttrs enabled? true
2019-09-02 10:57:13,697 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occurring more than 10 times
2019-09-02 10:57:13,702 INFO org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
2019-09-02 10:57:13,703 INFO org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager: SkipList is disabled
2019-09-02 10:57:13,706 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2019-09-02 10:57:13,706 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2019-09-02 10:57:13,706 INFO org.apache.hadoop.util.GSet: 0.25% max memory 4 GB = 10.2 MB
2019-09-02 10:57:13,706 INFO org.apache.hadoop.util.GSet: capacity = 2^20 = 1048576 entries
2019-09-02 10:57:13,712 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2019-09-02 10:57:13,712 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2019-09-02 10:57:13,712 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2019-09-02 10:57:13,714 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2019-09-02 10:57:13,714 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2019-09-02 10:57:13,715 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
2019-09-02 10:57:13,715 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2019-09-02 10:57:13,715 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 4 GB = 1.2 MB
2019-09-02 10:57:13,715 INFO org.apache.hadoop.util.GSet: capacity = 2^17 = 131072 entries
2019-09-02 10:57:13,727 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/local/Cellar/hadoop/hdfs/tmp/dfs/name/in_use.lock acquired by nodename 25057#Maggies-MacBook-Pro.local
2019-09-02 10:57:13,743 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Recovering unfinalized segments in /usr/local/Cellar/hadoop/hdfs/tmp/dfs/name/current
2019-09-02 10:57:13,748 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Planning to load image: FSImageFile(file=/usr/local/Cellar/hadoop/hdfs/tmp/dfs/name/current/fsimage_0000000000000000000, cpktTxId=0000000000000000000)
2019-09-02 10:57:13,792 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 1 INodes.
2019-09-02 10:57:13,809 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
2019-09-02 10:57:13,810 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid 0 from /usr/local/Cellar/hadoop/hdfs/tmp/dfs/name/current/fsimage_0000000000000000000
2019-09-02 10:57:13,812 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream#5c748168 expecting start txid #1
2019-09-02 10:57:13,813 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file /usr/local/Cellar/hadoop/hdfs/tmp/dfs/name/current/edits_0000000000000000001-0000000000000000002 maxTxnsToRead = 9223372036854775807
2019-09-02 10:57:13,815 INFO org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream: Fast-forwarding stream '/usr/local/Cellar/hadoop/hdfs/tmp/dfs/name/current/edits_0000000000000000001-0000000000000000002' to transaction ID 1
2019-09-02 10:57:13,826 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Edits file /usr/local/Cellar/hadoop/hdfs/tmp/dfs/name/current/edits_0000000000000000001-0000000000000000002 of size 42 edits # 2 loaded in 0 seconds
2019-09-02 10:57:13,826 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
2019-09-02 10:57:13,826 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 3
2019-09-02 10:57:13,910 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2019-09-02 10:57:13,911 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 193 msecs
2019-09-02 10:57:14,012 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: RPC server is binding to localhost:8020
2019-09-02 10:57:14,017 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
2019-09-02 10:57:14,023 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8020
2019-09-02 10:57:14,154 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemState, ReplicatedBlocksState and ECBlockGroupsState MBeans.
2019-09-02 10:57:14,170 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for active state
2019-09-02 10:57:14,170 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 3, 3
2019-09-02 10:57:14,175 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 2 Number of syncs: 3 SyncTimes(ms): 21
2019-09-02 10:57:14,177 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /usr/local/Cellar/hadoop/hdfs/tmp/dfs/name/current/edits_inprogress_0000000000000000003 -> /usr/local/Cellar/hadoop/hdfs/tmp/dfs/name/current/edits_0000000000000000003-0000000000000000004
2019-09-02 10:57:14,178 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: FSEditLogAsync was interrupted, exiting
2019-09-02 10:57:14,178 INFO org.apache.hadoop.ipc.Server: Stopping server on 8020
2019-09-02 10:57:14,198 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for active state
2019-09-02 10:57:14,198 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for standby state
2019-09-02 10:57:14,201 INFO org.eclipse.jetty.server.handler.ContextHandler: Stopped o.e.j.w.WebAppContext#4fdf8f12{/,null,UNAVAILABLE}{/hdfs}
2019-09-02 10:57:14,204 INFO org.eclipse.jetty.server.AbstractConnector: Stopped ServerConnector#5710768a{HTTP/1.1,[http/1.1]}{0.0.0.0:9870}
2019-09-02 10:57:14,204 INFO org.eclipse.jetty.server.handler.ContextHandler: Stopped o.e.j.s.ServletContextHandler#21ec5d87{/static,file:///usr/local/Cellar/hadoop/3.1.2/libexec/share/hadoop/hdfs/webapps/static/,UNAVAILABLE}
2019-09-02 10:57:14,204 INFO org.eclipse.jetty.server.handler.ContextHandler: Stopped o.e.j.s.ServletContextHandler#2f2bf0e2{/logs,file:///usr/local/Cellar/hadoop/3.1.2/libexec/logs/,UNAVAILABLE}
2019-09-02 10:57:14,205 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2019-09-02 10:57:14,205 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2019-09-02 10:57:14,205 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2019-09-02 10:57:14,209 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.io.IOException: Could not parse line: Su Mo Tu We Th Fr Sa
at org.apache.hadoop.fs.DF.parseOutput(DF.java:195)
at org.apache.hadoop.fs.DF.getFilesystem(DF.java:76)
at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker$CheckedVolume.<init>(NameNodeResourceChecker.java:69)
at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker.addDirToCheck(NameNodeResourceChecker.java:165)
at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker.<init>(NameNodeResourceChecker.java:134)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startCommonServices(FSNamesystem.java:1166)
at org.apache.hadoop.hdfs.server.namenode.NameNode.startCommonServices(NameNode.java:788)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:714)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
2019-09-02 10:57:14,210 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1: java.io.IOException: Could not parse line: Su Mo Tu We Th Fr Sa
2019-09-02 10:57:14,212 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at Maggies-MacBook-Pro.local/10.0.0.73
************************************************************/
Try formatting the NameNode and then starting all the daemons again using start-all.sh.
That should solve the issue, I guess.
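A minimal sketch of that sequence, assuming a single-node dev setup (formatting wipes the HDFS metadata, so only do this on a throwaway cluster):
$ stop-all.sh            # stop any daemons that are still running
$ hdfs namenode -format  # re-creates the fsimage under dfs.namenode.name.dir
$ start-all.sh
$ jps                    # NameNode should now appear in the list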
Execute ssh localhost; it may fail to connect.
If it fails, set up passwordless SSH to localhost (add your key to authorized_keys).
Then run ./start-dfs.sh.
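A sketch of the usual passwordless-SSH setup (assuming no key exists yet):
$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa    # generate a key with an empty passphrase
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys
$ ssh localhost                               # should now connect without prompting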
In my case I run a fortune program from my .bashrc, and it prints some messages.
It seems that output interferes with the Hadoop scripts. My current version is 3.3.0.
The "Could not parse line: ..." error happens because Hadoop parses the output of a command it runs in a shell, and that output now includes the extra text printed by .bashrc, so the parse fails. After removing it from .bashrc, the error is gone.
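One way to keep that .bashrc output out of the non-interactive shells Hadoop spawns is to bail out early for non-interactive shells, for example:
# ~/.bashrc -- stop sourcing early for non-interactive shells so Hadoop's
# spawned commands don't see any extra output
case $- in
  *i*) ;;        # interactive shell: continue
  *)  return ;;  # non-interactive: stop here
esac
fortune          # banners below this point only run interactively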
In a normal (non-root) account, I created some directories:
/usr/local/hadoop-2.7.3/data/dfs/namenode
/usr/local/hadoop-2.7.3/data/dfs/namesecondary
/usr/local/hadoop-2.7.3/data/dfs/datanode
/usr/local/hadoop-2.7.3/data/yarn/nm-local-dir
/usr/local/hadoop-2.7.3/data/yarn/system/rmstore
Then I ran the following commands:
bin/hdfs namenode -format
sudo sbin/start-all.sh
jps
After that:
In the normal account, jps shows only Jps.
In the root account, jps shows Jps, DataNode, SecondaryNameNode, NodeManager, and ResourceManager.
I have two questions:
Why can I see only Jps in the normal account?
Why is the NameNode not started?
Thanks for reading. Any help would be much appreciated.
NameNode log file:
2017-04-06 01:16:15,217 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2017-04-06 01:16:15,220 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2017-04-06 01:16:15,680 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2017-04-06 01:16:15,843 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2017-04-06 01:16:15,843 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2017-04-06 01:16:15,845 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://localhost:9010
2017-04-06 01:16:15,846 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use localhost:9010 to access this namenode/service.
2017-04-06 01:16:16,070 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://localhost:50070
2017-04-06 01:16:16,152 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2017-04-06 01:16:16,158 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2017-04-06 01:16:16,165 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2017-04-06 01:16:16,169 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2017-04-06 01:16:16,171 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2017-04-06 01:16:16,171 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2017-04-06 01:16:16,171 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2017-04-06 01:16:16,300 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2017-04-06 01:16:16,303 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2017-04-06 01:16:16,330 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
2017-04-06 01:16:16,330 INFO org.mortbay.log: jetty-6.1.26
2017-04-06 01:16:16,581 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup#localhost:50070
2017-04-06 01:16:16,612 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop-2.7.3/data/dfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2017-04-06 01:16:16,612 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop-2.7.3/data/dfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2017-04-06 01:16:16,613 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2017-04-06 01:16:16,613 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2017-04-06 01:16:16,617 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop-2.7.3/data/dfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2017-04-06 01:16:16,617 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop-2.7.3/data/dfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2017-04-06 01:16:16,639 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
2017-04-06 01:16:16,639 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
2017-04-06 01:16:16,668 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2017-04-06 01:16:16,668 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2017-04-06 01:16:16,669 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2017-04-06 01:16:16,669 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2017 Apr 06 01:16:16
2017-04-06 01:16:16,670 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2017-04-06 01:16:16,670 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2017-04-06 01:16:16,671 INFO org.apache.hadoop.util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
2017-04-06 01:16:16,671 INFO org.apache.hadoop.util.GSet: capacity = 2^21 = 2097152 entries
2017-04-06 01:16:16,690 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2017-04-06 01:16:16,691 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 1
2017-04-06 01:16:16,691 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
2017-04-06 01:16:16,691 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
2017-04-06 01:16:16,691 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 2
2017-04-06 01:16:16,691 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2017-04-06 01:16:16,691 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
2017-04-06 01:16:16,691 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2017-04-06 01:16:16,706 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = root (auth:SIMPLE)
2017-04-06 01:16:16,707 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup = supergroup
2017-04-06 01:16:16,707 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2017-04-06 01:16:16,707 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2017-04-06 01:16:16,708 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2017-04-06 01:16:16,963 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2017-04-06 01:16:16,963 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2017-04-06 01:16:16,970 INFO org.apache.hadoop.util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
2017-04-06 01:16:16,970 INFO org.apache.hadoop.util.GSet: capacity = 2^20 = 1048576 entries
2017-04-06 01:16:16,971 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: ACLs enabled? false
2017-04-06 01:16:16,971 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: XAttrs enabled? true
2017-04-06 01:16:16,971 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Maximum size of an xattr: 16384
2017-04-06 01:16:16,971 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2017-04-06 01:16:16,977 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2017-04-06 01:16:16,977 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2017-04-06 01:16:16,977 INFO org.apache.hadoop.util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
2017-04-06 01:16:16,977 INFO org.apache.hadoop.util.GSet: capacity = 2^18 = 262144 entries
2017-04-06 01:16:16,978 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2017-04-06 01:16:16,978 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2017-04-06 01:16:16,978 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
2017-04-06 01:16:16,980 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2017-04-06 01:16:16,980 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2017-04-06 01:16:16,980 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2017-04-06 01:16:16,983 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2017-04-06 01:16:16,983 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2017-04-06 01:16:16,984 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
2017-04-06 01:16:16,984 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2017-04-06 01:16:16,984 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
2017-04-06 01:16:16,984 INFO org.apache.hadoop.util.GSet: capacity = 2^15 = 32768 entries
2017-04-06 01:16:17,005 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/local/hadoop-2.7.3/data/dfs/namenode/in_use.lock acquired by nodename 5360@localhost
2017-04-06 01:16:17,007 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage
java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:225)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:975)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:585)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:645)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:796)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559)
2017-04-06 01:16:17,032 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:50070
2017-04-06 01:16:17,035 WARN org.apache.hadoop.http.HttpServer2: HttpServer Acceptor: isRunning is false. Rechecking.
2017-04-06 01:16:17,035 WARN org.apache.hadoop.http.HttpServer2: HttpServer Acceptor: isRunning is false
2017-04-06 01:16:17,035 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2017-04-06 01:16:17,035 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2017-04-06 01:16:17,035 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2017-04-06 01:16:17,035 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:225)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:975)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:585)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:645)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:796)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559)
2017-04-06 01:16:17,036 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2017-04-06 01:16:17,040 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
Why does jps show only itself when run from my normal account?
Because you started the daemons with sudo, the root user owns those processes. jps reports only the JVMs the invoking user has access permissions for, so a normal account cannot see processes owned by root.
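A quick way to see this in practice (a sketch; which daemons appear depends on what was started):
jps        # as the normal user: typically lists only Jps itself
sudo jps   # as root: lists the NameNode, DataNode, etc. that were started with sudo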
Why doesn't the NameNode start?
java.io.IOException: NameNode is not formatted.
The NameNode has not been formatted yet. It is possible you missed typing Y when the format command prompted with (Y/N).
Check whether the properties are set correctly in core-site.xml and hdfs-site.xml.
Then run the following command:
$ hdfs namenode -format
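If you keep missing the (Y/N) prompt, the format command can also be run non-interactively. The -force flag shown below appears in the NameNode usage output quoted further down this page; only use it on a setup whose data you can afford to lose:
$ hdfs namenode -format -force    # answers the confirmation for you and reformats, wiping any existing metadata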
Not sure, but check the ownership of the NameNode folder.
It should be owned by the hadoop user (or whichever user runs the daemons) so that the daemon has the authority to access this folder.
I had the same issue and solved it by changing the ownership of the folder; also assign full permission to the folder.
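For example, assuming the metadata directory from the lock message above (/usr/local/hadoop-2.7.3/data/dfs/namenode) and a daemon user/group of hadoop:hadoop (both are assumptions; adjust them to your setup):
ls -ld /usr/local/hadoop-2.7.3/data/dfs/namenode              # check the current owner and permissions
sudo chown -R hadoop:hadoop /usr/local/hadoop-2.7.3/data/dfs  # hand the tree to the user that runs the NameNode
sudo chmod -R u+rwX /usr/local/hadoop-2.7.3/data/dfs          # make sure that user can traverse and write it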
Hope this helps.
My NameNode is not starting up.
I tried formatting and deleting the tmp directory before attempting a restart, but it doesn't come up.
I am currently attempting a two-node cluster. I cloned both nodes from a single-node machine and changed the properties so that one node runs the NameNode, JobTracker and secondary NameNode, and the other runs the rest.
On trying to start the NameNode, I get the exception below in the logs. I tried searching but didn't find anything specific to my problem. I have also set up passwordless SSH, in case any permissions were being denied because of that.
2015-08-08 12:40:59,005 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = HNNAME/192.168.136.170
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.0.0-cdh4.7.0
STARTUP_MSG: classpath = (very long list of jars, trimmed for brevity)
STARTUP_MSG: build = git://centos32-6-slave.sf.cloudera.com/data/1/jenkins/workspace/generic-package-centos32-6/topdir/BUILD/hadoop-2.0.0-cdh4.7.0/src/hadoop-common-project/hadoop-common -r 8e266e052e423af592871e2dfe09d54c03f6a0e8; compiled by 'jenkins' on Wed May 28 10:12:25 PDT 2014
STARTUP_MSG: java = 1.6.0_45
************************************************************/
2015-08-08 12:40:59,010 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2015-08-08 12:40:59,576 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2015-08-08 12:40:59,718 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-08-08 12:40:59,718 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2015-08-08 12:41:00,059 WARN org.apache.hadoop.hdfs.server.common.Util: Path /storage/name should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-08 12:41:00,060 WARN org.apache.hadoop.hdfs.server.common.Util: Path /storage/name should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-08 12:41:00,061 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of dataloss due to lack of redundant storage directories!
2015-08-08 12:41:00,061 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of dataloss due to lack of redundant storage directories!
2015-08-08 12:41:00,069 WARN org.apache.hadoop.hdfs.server.common.Util: Path /storage/name should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-08 12:41:00,069 WARN org.apache.hadoop.hdfs.server.common.Util: Path /storage/name should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-08 12:41:00,101 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
2015-08-08 12:41:00,165 INFO org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager: Setting heartbeat recheck interval to 30000 since dfs.namenode.stale.datanode.interval is less than dfs.namenode.heartbeat.recheck-interval
2015-08-08 12:41:00,180 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2015-08-08 12:41:00,187 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2015-08-08 12:41:00,187 INFO org.apache.hadoop.util.GSet: VM type = 32-bit
2015-08-08 12:41:00,196 INFO org.apache.hadoop.util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
2015-08-08 12:41:00,196 INFO org.apache.hadoop.util.GSet: capacity = 2^22 = 4194304 entries
2015-08-08 12:41:01,099 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2015-08-08 12:41:01,100 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 1
2015-08-08 12:41:01,100 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
2015-08-08 12:41:01,100 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
2015-08-08 12:41:01,100 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 2
2015-08-08 12:41:01,100 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
2015-08-08 12:41:01,100 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2015-08-08 12:41:01,100 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
2015-08-08 12:41:01,100 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2015-08-08 12:41:01,110 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = hdfs (auth:SIMPLE)
2015-08-08 12:41:01,110 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup = supergroup
2015-08-08 12:41:01,111 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = false
2015-08-08 12:41:01,111 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2015-08-08 12:41:01,115 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2015-08-08 12:41:01,547 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2015-08-08 12:41:01,549 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2015-08-08 12:41:01,549 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2015-08-08 12:41:01,549 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
2015-08-08 12:41:01,562 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /storage/name/in_use.lock acquired by nodename 7800@HNNAME
2015-08-08 12:41:01,640 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Recovering unfinalized segments in /storage/name/current
2015-08-08 12:41:01,772 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loading image file /storage/name/current/fsimage_0000000000000038306 using no compression
2015-08-08 12:41:01,772 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Number of files = 4012
2015-08-08 12:41:01,932 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Number of files under construction = 1
2015-08-08 12:41:01,941 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Image file of size 343797 loaded in 0 seconds.
2015-08-08 12:41:01,941 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid 38306 from /storage/name/current/fsimage_0000000000000038306
2015-08-08 12:41:01,944 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@c623af expecting start txid #38307
2015-08-08 12:41:01,965 INFO org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding stream '/storage/name/current/edits_0000000000000038307-0000000000000038308' to transaction ID 38307
2015-08-08 12:41:01,985 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Edits file /storage/name/current/edits_0000000000000038307-0000000000000038308 of size 30 edits # 2 loaded in 0 seconds
2015-08-08 12:41:02,045 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 38309
2015-08-08 12:41:02,154 WARN org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Unable to start log segment 38309 at /storage/name/current/edits_inprogress_0000000000000038309: /storage/name/current/edits_inprogress_0000000000000038309 (Permission denied)
2015-08-08 12:41:02,154 ERROR org.apache.hadoop.hdfs.server.namenode.NNStorage: Error reported on storage directory Storage Directory /storage/name
2015-08-08 12:41:02,154 WARN org.apache.hadoop.hdfs.server.namenode.NNStorage: About to remove corresponding storage: /storage/name
2015-08-08 12:41:02,155 ERROR org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: starting log segment 38309 failed for (journal JournalAndStream(mgr=FileJournalManager(root=/storage/name), stream=null))
java.io.FileNotFoundException: /storage/name/current/edits_inprogress_0000000000000038309 (Permission denied)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
at org.apache.hadoop.hdfs.server.namenode.EditLogFileOutputStream.<init>(EditLogFileOutputStream.java:74)
at org.apache.hadoop.hdfs.server.namenode.FileJournalManager.startLogSegment(FileJournalManager.java:105)
at org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalAndStream.startLogSegment(JournalSet.java:89)
at org.apache.hadoop.hdfs.server.namenode.JournalSet$2.apply(JournalSet.java:197)
at org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:347)
at org.apache.hadoop.hdfs.server.namenode.JournalSet.startLogSegment(JournalSet.java:194)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:923)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.openForWrite(FSEditLog.java:264)
at org.apache.hadoop.hdfs.server.namenode.FSImage.openEditLogForWrite(FSImage.java:574)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:747)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:531)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:403)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:445)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:621)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:606)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1177)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1241)
2015-08-08 12:41:02,156 ERROR org.apache.hadoop.hdfs.server.namenode.FSEditLog: Disabling journal JournalAndStream(mgr=FileJournalManager(root=/storage/name), stream=null)
2015-08-08 12:41:02,156 ERROR org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: starting log segment 38309 failed for too many journals
2015-08-08 12:41:02,157 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2015-08-08 12:41:02,157 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2015-08-08 12:41:02,158 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2015-08-08 12:41:02,158 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.io.IOException: Unable to start log segment 38309: too few journals successfully started.
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:925)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.openForWrite(FSEditLog.java:264)
at org.apache.hadoop.hdfs.server.namenode.FSImage.openEditLogForWrite(FSImage.java:574)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:747)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:531)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:403)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:445)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:621)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:606)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1177)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1241)
Caused by: java.io.IOException: starting log segment 38309 failed for too many journals
at org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:374)
at org.apache.hadoop.hdfs.server.namenode.JournalSet.startLogSegment(JournalSet.java:194)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:923)
... 10 more
2015-08-08 12:41:02,159 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2015-08-08 12:41:02,160 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at HNNAME/192.168.136.170
************************************************************/
I think you are not setting the permissions on the NameNode metadata location correctly. To make sure it works, verify your procedure against the steps below.
Assuming Namenode meta-data location: /storage/name
mkdir -p /storage/name
chown -R hdfs:hadoop /storage/name
sudo -u hdfs hadoop namenode -format
service hadoop-hdfs-namenode start (assuming a CDH RPM installation; this varies with the installation method you used)
The Hadoop daemons start as the hdfs user, and if the metadata location is not owned by that user (and the Hadoop superuser group), you get the error mentioned above.
If you look at the log you posted, the filesystem owner (fsOwner) is hdfs and the supergroup is supergroup, yet the exception is a FileNotFoundException with "Permission denied": the service responsible for starting the NameNode cannot write the new edits file because it does not have the required permissions.
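A quick way to confirm whether the hdfs user can actually write where the new edit log segment is created, using the paths from the log above (a sketch, not a definitive check):
stat -c '%U:%G %a %n' /storage/name /storage/name/current   # expect hdfs:hadoop, or at least write access for hdfs
sudo -u hdfs touch /storage/name/current/permtest           # fails with "Permission denied" if ownership is still wrong
sudo -u hdfs rm -f /storage/name/current/permtest           # clean up the probe file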
I had the same problem in Hortonworks after adding another disk to HDFS. I simply did chown hdfs:hadoop -R /hadoop/hdfs and it started working.
For my Hadoop on Ubuntu (WSL), the following works (note that reformatting the NameNode erases the existing HDFS metadata, so only do this on a fresh or test setup):
cd /usr/local/hadoop # your directory path may be different
./sbin/stop-dfs.sh # stop
rm -r ./tmp # delete tmp folder
./bin/hdfs namenode -format # reformat NameNode
./sbin/start-dfs.sh # restart
Hope this can help u :)
Much like the question above states, when I run the command sudo service hadoop-hdfs-namenode start, it fails with the message below.
2015-02-01 16:51:22,032 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2015-02-01 16:51:22,379 WARN org.apache.hadoop.metrics2.impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2015-02-01 16:51:22,512 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-02-01 16:51:22,512 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2015-02-01 16:51:23,043 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of dataloss due to lack of redundant storage directories!
2015-02-01 16:51:23,043 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of dataloss due to lack of redundant storage directories!
2015-02-01 16:51:23,096 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
2015-02-01 16:51:23,214 INFO org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager: Setting heartbeat recheck interval to 30000 since dfs.namenode.stale.datanode.interval is less than dfs.namenode.heartbeat.recheck-interval
2015-02-01 16:51:23,223 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2015-02-01 16:51:23,227 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2015-02-01 16:51:23,227 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2015-02-01 16:51:23,232 INFO org.apache.hadoop.util.GSet: 2.0% max memory 889 MB = 17.8 MB
2015-02-01 16:51:23,233 INFO org.apache.hadoop.util.GSet: capacity = 2^21 = 2097152 entries
2015-02-01 16:51:23,242 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2015-02-01 16:51:23,242 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 1
2015-02-01 16:51:23,242 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
2015-02-01 16:51:23,242 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
2015-02-01 16:51:23,242 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 2
2015-02-01 16:51:23,242 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
2015-02-01 16:51:23,243 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2015-02-01 16:51:23,243 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
2015-02-01 16:51:23,243 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2015-02-01 16:51:23,253 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = hdfs (auth:SIMPLE)
2015-02-01 16:51:23,254 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup = supergroup
2015-02-01 16:51:23,254 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = false
2015-02-01 16:51:23,254 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2015-02-01 16:51:23,259 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2015-02-01 16:51:23,555 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2015-02-01 16:51:23,558 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2015-02-01 16:51:23,558 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2015-02-01 16:51:23,558 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 0
2015-02-01 16:51:23,563 WARN org.apache.hadoop.hdfs.server.common.Storage: Storage directory /var/lib/hadoop-hdfs/cache/hdfs/dfs/name does not exist
2015-02-01 16:51:23,565 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2015-02-01 16:51:23,565 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2015-02-01 16:51:23,565 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2015-02-01 16:51:23,566 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /var/lib/hadoop-hdfs/cache/hdfs/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:302)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:207)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:741)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:531)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:403)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:445)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:621)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:606)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1177)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1241)
2015-02-01 16:51:23,571 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2015-02-01 16:51:23,573 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop/127.0.0.1
************************************************************/
The error itself is pretty self-explanatory: the directory /var/lib/hadoop-hdfs/cache/hdfs/dfs/name is missing, which is correct. The cache directory was empty, so I created /cache/hdfs/dfs/name and changed the owner and group to match the directories above them: hdfs:hadoop.
I then ran the format command sudo -u hdfs hdfs namenode –format again, which ends the same way it did before creating this directory:
STARTUP_MSG: build = file:///data/jenkins/workspace/generic-package-rhel64-6-0/topdir/BUILD/hadoop-2.0.0-cdh4.7.1/src/hadoop-common-project/hadoop-common -r Unknown; compiled by 'jenkins' on Tue Nov 18 08:10:25 PST 2014
STARTUP_MSG: java = 1.7.0_75
************************************************************/
15/02/01 17:09:04 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
Usage: java NameNode [-backup] | [-checkpoint] | [-format [-clusterid cid ] [-force] [-nonInteractive] ] | [-upgrade] | [-rollback] | [-finalize] | [-importCheckpoint] | [-initializeSharedEdits] | [-bootstrapStandby] | [-recover [ -force ] ]
15/02/01 17:09:04 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop/127.0.0.1
Now I run the namenode start command again and receive the following error:
STARTUP_MSG: build = file:///data/jenkins/workspace/generic-package-rhel64-6-0/topdir/BUILD/hadoop-2.0.0-cdh4.7.1/src/hadoop-common-project/hadoop-common -r Unknown; compiled by 'jenkins' on Tue Nov 18 08:10:25 PST 2014
STARTUP_MSG: java = 1.7.0_75
************************************************************/
2015-02-01 17:09:26,774 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2015-02-01 17:09:27,097 WARN org.apache.hadoop.metrics2.impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2015-02-01 17:09:27,215 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-02-01 17:09:27,216 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2015-02-01 17:09:27,721 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of dataloss due to lack of redundant storage directories!
2015-02-01 17:09:27,721 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of dataloss due to lack of redundant storage directories!
2015-02-01 17:09:27,779 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
2015-02-01 17:09:27,883 INFO org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager: Setting heartbeat recheck interval to 30000 since dfs.namenode.stale.datanode.interval is less than dfs.namenode.heartbeat.recheck-interval
2015-02-01 17:09:27,890 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2015-02-01 17:09:27,895 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2015-02-01 17:09:27,895 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2015-02-01 17:09:27,899 INFO org.apache.hadoop.util.GSet: 2.0% max memory 889 MB = 17.8 MB
2015-02-01 17:09:27,899 INFO org.apache.hadoop.util.GSet: capacity = 2^21 = 2097152 entries
2015-02-01 17:09:27,909 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2015-02-01 17:09:27,909 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 1
2015-02-01 17:09:27,909 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
2015-02-01 17:09:27,909 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
2015-02-01 17:09:27,909 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 2
2015-02-01 17:09:27,909 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
2015-02-01 17:09:27,909 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2015-02-01 17:09:27,909 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
2015-02-01 17:09:27,910 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2015-02-01 17:09:27,918 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = hdfs (auth:SIMPLE)
2015-02-01 17:09:27,918 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup = supergroup
2015-02-01 17:09:27,918 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = false
2015-02-01 17:09:27,918 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2015-02-01 17:09:27,924 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2015-02-01 17:09:28,178 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2015-02-01 17:09:28,180 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2015-02-01 17:09:28,180 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2015-02-01 17:09:28,180 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 0
2015-02-01 17:09:28,193 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /var/lib/hadoop-hdfs/cache/hdfs/dfs/name/in_use.lock acquired by nodename 28482@hadoop
2015-02-01 17:09:28,196 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2015-02-01 17:09:28,196 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2015-02-01 17:09:28,196 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2015-02-01 17:09:28,197 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:217)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:741)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:531)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:403)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:445)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:621)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:606)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1177)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1241)
2015-02-01 17:09:28,202 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2015-02-01 17:09:28,205 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop/127.0.0.1
************************************************************/
My system is running in VirtualBox with a CentOS 6.6 guest and Oracle JDK 1.7, and I am attempting to run Cloudera CDH4. Any input on what to do next to resolve this issue would be appreciated.
If you copied and pasted the format command from slides or something similar, can you actually type it out and see if it works?
I don't know if you can see the difference between
–format and -format.
The dash looks different to me.
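If the difference is hard to see, dumping the bytes of the pasted command makes it obvious (a sketch; the strings below are only examples):
echo 'hdfs namenode –format' | od -c   # pasted from slides: the dash is a UTF-8 en dash, bytes 342 200 223
echo 'hdfs namenode -format' | od -c   # typed by hand: a plain ASCII hyphen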
Had the same issue formatting the NameNode. Retyped the command (not copy-paste):
hdfs namenode -format
it worked. Thx https://stackoverflow.com/users/4533812/james
I am new to Hadoop and am facing the following issue while starting the NameNode with the ./hadoop-daemon.sh start namenode command.
Steps I followed:
1. Downloaded an Ubuntu 13 VM and installed Java 1.6 and hadoop-2.2.0
2. Updated the configuration files
3. Ran hadoop namenode –format
4. Ran ./hadoop-daemon.sh start namenode from the sbin dir
The error is:
2014-01-04 06:55:48,561 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2014-01-04 06:55:48,565 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2014-01-04 06:55:48,565 INFO org.apache.hadoop.util.GSet: VM type = 32-bit
2014-01-04 06:55:48,571 INFO org.apache.hadoop.util.GSet: 2.0% max memory = 888.9 MB
2014-01-04 06:55:48,571 INFO org.apache.hadoop.util.GSet: capacity = 2^22 = 4194304 entries
2014-01-04 06:55:48,603 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2014-01-04 06:55:48,604 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 1
2014-01-04 06:55:48,604 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
2014-01-04 06:55:48,604 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
2014-01-04 06:55:48,604 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 2
2014-01-04 06:55:48,604 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
2014-01-04 06:55:48,605 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2014-01-04 06:55:48,605 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
2014-01-04 06:55:48,616 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = user (auth:SIMPLE)
2014-01-04 06:55:48,617 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup = supergroup
2014-01-04 06:55:48,617 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2014-01-04 06:55:48,617 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2014-01-04 06:55:48,621 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2014-01-04 06:55:48,717 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2014-01-04 06:55:48,717 INFO org.apache.hadoop.util.GSet: VM type = 32-bit
2014-01-04 06:55:48,717 INFO org.apache.hadoop.util.GSet: 1.0% max memory = 888.9 MB
2014-01-04 06:55:48,717 INFO org.apache.hadoop.util.GSet: capacity = 2^21 = 2097152 entries
2014-01-04 06:55:48,732 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2014-01-04 06:55:48,738 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2014-01-04 06:55:48,738 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2014-01-04 06:55:48,738 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
2014-01-04 06:55:48,740 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2014-01-04 06:55:48,740 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2014-01-04 06:55:48,744 INFO org.apache.hadoop.util.GSet: Computing capacity for map Namenode Retry Cache
2014-01-04 06:55:48,744 INFO org.apache.hadoop.util.GSet: VM type = 32-bit
2014-01-04 06:55:48,744 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory = 888.9 MB
2014-01-04 06:55:48,744 INFO org.apache.hadoop.util.GSet: capacity = 2^16 = 65536 entries
2014-01-04 06:55:48,768 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/user/hadoop2_data/hdfs/namenode/in_use.lock acquired by nodename 12574@ubuntuvm
2014-01-04 06:55:48,785 INFO org.mortbay.log: Stopped SelectChannelConnector@0.0.0.0:50070
2014-01-04 06:55:48,789 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2014-01-04 06:55:48,791 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2014-01-04 06:55:48,791 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2014-01-04 06:55:48,793 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:210)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:787)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:568)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:443)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:491)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:684)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:669)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1254)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1320)
2014-01-04 06:55:48,798 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2014-01-04 06:55:48,803 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntuvm/127.0.1.1
************************************************************/
Can someone help me resolve this issue? I tried Googling but still didn't find a solution.
Looks like your "hadoop namenode -format" didn't take (I assume you've tried hitting that command again and it still doesn't work). When you invoke hadoop namenode -format the user you are running as must have write access to the directories in dfs.data.dir and dfs.name.dir.
By default they are set to
${hadoop.tmp.dir}/dfs/data
and
${hadoop.tmp.dir}/dfs/name
where hadoop.tmp.dir is another config property that defaults to /tmp/hadoop-${username}.
So by default the hadoop data files are kept under your /tmp directory, which is not that great, especially if you have scripts that can clean out those directories.
Ensure that in hdfs-site.xml you have set dfs.data.dir and dfs.name.dir (dfs.datanode.data.dir and dfs.namenode.name.dir under Hadoop 2.x) to directories that the user who runs the Hadoop admin commands and starts the Hadoop daemons can write to. Then reformat HDFS and try again.
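For example, given the lock path in the log above (/home/user/hadoop2_data/hdfs/namenode), a minimal check-and-retry sketch might look like the following; the paths and the user name are taken from that log and may differ on your machine:
ls -ld /home/user/hadoop2_data/hdfs/namenode     # the user starting the NameNode must own this and be able to write to it
sudo chown -R user:user /home/user/hadoop2_data  # "user" matches the fsOwner shown in the log; use your own account
hdfs namenode -format                            # retype by hand with a plain '-' before format, and answer Y at the prompt
./hadoop-daemon.sh start namenode                # run from the sbin directory, as in the question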