Shutting down NodeManager when it starts - hadoop

I have installed Hadoop on my laptop. To launch it I run the command start-all.cmd, and it starts 4 daemon processes. For 3 of the 4 processes the cmd window shows
SHUTDOWN_MSG: Shutting down NameNode at DESKTOP-T7R9JV1/192.168.1.100
How can I avoid this? Here is the full NameNode output:
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = DESKTOP-T7R9JV1/192.168.1.101
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.9.1
19/09/08 22:03:13 INFO namenode.NameNode: createNameNode []
19/09/08 22:03:14 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
19/09/08 22:03:14 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
19/09/08 22:03:14 INFO impl.MetricsSystemImpl: NameNode metrics system started
19/09/08 22:03:14 INFO namenode.NameNode: fs.defaultFS is hdfs://0.0.0.0:19000
19/09/08 22:03:14 INFO namenode.NameNode: Clients are to use 0.0.0.0:19000 to access this namenode/service.
19/09/08 22:03:14 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
19/09/08 22:03:15 INFO util.JvmPauseMonitor: Starting JVM pause monitor
19/09/08 22:03:15 INFO hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
19/09/08 22:03:15 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
19/09/08 22:03:15 INFO server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
19/09/08 22:03:15 INFO http.HttpRequestLog: Http request log for http.requests.namenode is not defined
19/09/08 22:03:15 INFO http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
19/09/08 22:03:15 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
19/09/08 22:03:15 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
19/09/08 22:03:15 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
19/09/08 22:03:16 INFO http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
19/09/08 22:03:16 INFO http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
19/09/08 22:03:16 INFO http.HttpServer2: Jetty bound to port 50070
19/09/08 22:03:16 INFO mortbay.log: jetty-6.1.26
19/09/08 22:03:16 INFO mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup#0.0.0.0:50070
19/09/08 22:03:17 ERROR common.Util: Syntax error in URI C:\BigData\hadoop-2.9.1\data\namenode. Please check hdfs configuration.
java.net.URISyntaxException: Illegal character in opaque part at index 2: C:\BigData\hadoop-2.9.1\data\namenode
at java.net.URI$Parser.fail(URI.java:2848)
at java.net.URI$Parser.checkChars(URI.java:3021)
at java.net.URI$Parser.parse(URI.java:3058)
at java.net.URI.<init>(URI.java:588)
at org.apache.hadoop.hdfs.server.common.Util.stringAsURI(Util.java:49)
at org.apache.hadoop.hdfs.server.common.Util.stringCollectionAsURIs(Util.java:99)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getStorageDirs(FSNamesystem.java:1462)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNamespaceDirs(FSNamesystem.java:1417)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkConfiguration(FSNamesystem.java:617)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:669)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:666)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:728)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:953)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1673)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1741)
19/09/08 22:03:17 WARN common.Util: Path C:\BigData\hadoop-2.9.1\data\namenode should be specified as a URI in configuration files. Please update hdfs configuration.
19/09/08 22:03:17 ERROR common.Util: Syntax error in URI C:\BigData\hadoop-2.9.1\data\namenode. Please check hdfs configuration.
java.net.URISyntaxException: Illegal character in opaque part at index 2: C:\BigData\hadoop-2.9.1\data\namenode
at java.net.URI$Parser.fail(URI.java:2848)
at java.net.URI$Parser.checkChars(URI.java:3021)
at java.net.URI$Parser.parse(URI.java:3058)
at java.net.URI.<init>(URI.java:588)
at org.apache.hadoop.hdfs.server.common.Util.stringAsURI(Util.java:49)
at org.apache.hadoop.hdfs.server.common.Util.stringCollectionAsURIs(Util.java:99)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getStorageDirs(FSNamesystem.java:1462)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNamespaceEditsDirs(FSNamesystem.java:1507)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNamespaceEditsDirs(FSNamesystem.java:1476)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkConfiguration(FSNamesystem.java:619)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:669)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:666)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:728)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:953)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1673)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1741)
19/09/08 22:03:17 WARN common.Util: Path C:\BigData\hadoop-2.9.1\data\namenode should be specified as a URI in configuration files. Please update hdfs configuration.
19/09/08 22:03:17 WARN namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
19/09/08 22:03:17 WARN namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
19/09/08 22:03:17 ERROR common.Util: Syntax error in URI C:\BigData\hadoop-2.9.1\data\namenode. Please check hdfs configuration.
java.net.URISyntaxException: Illegal character in opaque part at index 2: C:\BigData\hadoop-2.9.1\data\namenode
at java.net.URI$Parser.fail(URI.java:2848)
at java.net.URI$Parser.checkChars(URI.java:3021)
at java.net.URI$Parser.parse(URI.java:3058)
at java.net.URI.<init>(URI.java:588)
at org.apache.hadoop.hdfs.server.common.Util.stringAsURI(Util.java:49)
at org.apache.hadoop.hdfs.server.common.Util.stringCollectionAsURIs(Util.java:99)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getStorageDirs(FSNamesystem.java:1462)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNamespaceDirs(FSNamesystem.java:1417)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:670)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:666)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:728)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:953)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1673)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1741)
19/09/08 22:03:17 WARN common.Util: Path C:\BigData\hadoop-2.9.1\data\namenode should be specified as a URI in configuration files. Please update hdfs configuration.
19/09/08 22:03:17 ERROR common.Util: Syntax error in URI C:\BigData\hadoop-2.9.1\data\namenode. Please check hdfs configuration.
java.net.URISyntaxException: Illegal character in opaque part at index 2: C:\BigData\hadoop-2.9.1\data\namenode
at java.net.URI$Parser.fail(URI.java:2848)
at java.net.URI$Parser.checkChars(URI.java:3021)
at java.net.URI$Parser.parse(URI.java:3058)
at java.net.URI.<init>(URI.java:588)
at org.apache.hadoop.hdfs.server.common.Util.stringAsURI(Util.java:49)
at org.apache.hadoop.hdfs.server.common.Util.stringCollectionAsURIs(Util.java:99)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getStorageDirs(FSNamesystem.java:1462)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNamespaceEditsDirs(FSNamesystem.java:1507)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNamespaceEditsDirs(FSNamesystem.java:1476)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:670)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:666)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:728)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:953)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1673)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1741)
19/09/08 22:03:17 WARN common.Util: Path C:\BigData\hadoop-2.9.1\data\namenode should be specified as a URI in configuration files. Please update hdfs configuration.
19/09/08 22:03:17 INFO namenode.FSEditLog: Edit logging is async:true
19/09/08 22:03:17 INFO namenode.FSNamesystem: KeyProvider: null
19/09/08 22:03:17 INFO namenode.FSNamesystem: fsLock is fair: true
19/09/08 22:03:17 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
19/09/08 22:03:17 INFO namenode.FSNamesystem: fsOwner = User (auth:SIMPLE)
19/09/08 22:03:17 INFO namenode.FSNamesystem: supergroup = supergroup
19/09/08 22:03:17 INFO namenode.FSNamesystem: isPermissionEnabled = true
19/09/08 22:03:17 INFO namenode.FSNamesystem: HA Enabled: false
19/09/08 22:03:17 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
19/09/08 22:03:17 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
19/09/08 22:03:17 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
19/09/08 22:03:17 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
19/09/08 22:03:17 INFO blockmanagement.BlockManager: The block deletion will start around 2019 Sep 08 22:03:17
19/09/08 22:03:17 INFO util.GSet: Computing capacity for map BlocksMap
19/09/08 22:03:17 INFO util.GSet: VM type = 32-bit
19/09/08 22:03:17 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
19/09/08 22:03:17 INFO util.GSet: capacity = 2^22 = 4194304 entries
19/09/08 22:03:17 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
19/09/08 22:03:17 WARN conf.Configuration: No unit for dfs.heartbeat.interval(3) assuming SECONDS
19/09/08 22:03:17 WARN conf.Configuration: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS
19/09/08 22:03:17 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
19/09/08 22:03:17 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
19/09/08 22:03:17 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
19/09/08 22:03:17 INFO blockmanagement.BlockManager: defaultReplication = 1
19/09/08 22:03:17 INFO blockmanagement.BlockManager: maxReplication = 512
19/09/08 22:03:17 INFO blockmanagement.BlockManager: minReplication = 1
19/09/08 22:03:17 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
19/09/08 22:03:17 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
19/09/08 22:03:17 INFO blockmanagement.BlockManager: encryptDataTransfer = false
19/09/08 22:03:17 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
19/09/08 22:03:17 INFO namenode.FSNamesystem: Append Enabled: true
19/09/08 22:03:17 INFO util.GSet: Computing capacity for map INodeMap
19/09/08 22:03:17 INFO util.GSet: VM type = 32-bit
19/09/08 22:03:17 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
19/09/08 22:03:17 INFO util.GSet: capacity = 2^21 = 2097152 entries
19/09/08 22:03:17 INFO namenode.FSDirectory: ACLs enabled? false
19/09/08 22:03:17 INFO namenode.FSDirectory: XAttrs enabled? true
19/09/08 22:03:17 INFO namenode.NameNode: Caching file names occurring more than 10 times
19/09/08 22:03:17 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: falseskipCaptureAccessTimeOnlyChange: false
19/09/08 22:03:17 INFO util.GSet: Computing capacity for map cachedBlocks
19/09/08 22:03:17 INFO util.GSet: VM type = 32-bit
19/09/08 22:03:17 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
19/09/08 22:03:17 INFO util.GSet: capacity = 2^19 = 524288 entries
19/09/08 22:03:17 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
19/09/08 22:03:17 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
19/09/08 22:03:17 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
19/09/08 22:03:17 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
19/09/08 22:03:17 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
19/09/08 22:03:17 INFO util.GSet: Computing capacity for map NameNodeRetryCache
19/09/08 22:03:17 INFO util.GSet: VM type = 32-bit
19/09/08 22:03:17 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
19/09/08 22:03:17 INFO util.GSet: capacity = 2^16 = 65536 entries
19/09/08 22:03:17 ERROR namenode.NameNode: Failed to start namenode.
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:606)
at org.apache.hadoop.fs.FileUtil.canWrite(FileUtil.java:1006)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:558)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:518)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:370)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:226)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1048)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:666)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:728)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:953)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1673)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1741)
19/09/08 22:03:17 INFO util.ExitUtil: Exiting with status 1: java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
19/09/08 22:03:17 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at DESKTOP-T7R9JV1/192.168.1.101
************************************************************/

Illegal character in opaque part at index 2
Index 2 is the backslash, which is not a valid URI character.
In your config files you need to use forward slashes and a file: scheme for the URI.
For example, change
C:\BigData\hadoop-2.9.1\data\namenode
to
file:/C:/BigData/hadoop-2.9.1/data/namenode
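For example, if the path is set through dfs.namenode.name.dir in hdfs-site.xml (the property named in the warnings above), the corrected entry would look roughly like this; the same change applies to dfs.namenode.edits.dir if it points at the same Windows path:
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/C:/BigData/hadoop-2.9.1/data/namenode</value>
</property>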

For this problem I looked for more solutions on the internet.
My first problem was that I had a 32-bit Java version installed on Windows, so my JAVA_HOME environment variable was set to C:\Progra~2\Java\<JDK version>. I then switched to a 64-bit Java version by setting JAVA_HOME to C:\Progra~1\Java\<JDK version>.
After setting the 64-bit Java as JAVA_HOME I ran start-all.cmd again. All daemons except the namenode then worked. To get the namenode running I followed these steps:
Open cmd as administrator.
Run stop-all.cmd
hadoop namenode -format
start-all.cmd
This solved my problem completely and worked perfectly for me.
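As a quick sanity check before the steps above, you can confirm which JDK is being picked up and repoint JAVA_HOME from cmd; a minimal sketch (setx makes the change persistent for new cmd windows, and <JDK version> is whatever folder your 64-bit JDK installed into):
rem a 64-bit JDK reports "64-Bit Server VM" in its version banner
java -version
rem point JAVA_HOME at the 64-bit JDK for future cmd sessions
setx JAVA_HOME "C:\Progra~1\Java\<JDK version>"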

You have to format your namenode with hadoop namenode -format and then restart the server.

Related

HADOOP 3.1.2 Namenode not starting

I am new to Hadoop, so I would really appreciate any feedback on this issue.
The Hadoop setup seems fine. I am able to start it, but when I checked the web UI at: http://localhost:50070 or http://localhost:9870 it shows the site can't be reached. Similarly, to check Yarn with the web UI http://localhost:8088, I had the same problem.
The jps command shows the following:
50714 SecondaryNameNode
88442
51756 Jps
50589 DataNode
Namenode, ResourceManager, NodeManager are missing.
I have tried changing the port configuration, but it didn't help.
Reference: http://localhost:50070 does not work HADOOP
hadoop web UI at http://localhost:50070/ doesnt work
$ ./start-dfs.sh
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [Maggies-MacBook-Pro.local]
2019-09-01 17:33:33,523 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
sbin ./start-yarn.sh
Starting resourcemanager
Starting nodemanagers
After reformatting the namenode and running start-all.sh:
sbin ./start-all.sh
WARNING: Attempting to start all Apache Hadoop daemons as zxiao in 10 seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [Maggies-MacBook-Pro.local]
2019-09-02 09:19:31,657 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting resourcemanager
Starting nodemanagers
sbin jps
98359 SecondaryNameNode
99014 Jps
98232 DataNode
88442
I still cannot get the namenode started, and the web UI still won't show up.
Update: Here is the log file for the namenode:
2019-09-02 10:57:12,784 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2019-09-02 10:57:12,850 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2019-09-02 10:57:12,965 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2019-09-02 10:57:13,089 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2019-09-02 10:57:13,090 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2019-09-02 10:57:13,112 INFO org.apache.hadoop.hdfs.server.namenode.NameNodeUtils: fs.defaultFS is hdfs://localhost:8020
2019-09-02 10:57:13,112 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients should use localhost:8020 to access this namenode/service.
2019-09-02 10:57:13,134 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2019-09-02 10:57:13,209 INFO org.apache.hadoop.util.JvmPauseMonitor: Starting JVM pause monitor
2019-09-02 10:57:13,226 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:9870
2019-09-02 10:57:13,235 INFO org.eclipse.jetty.util.log: Logging initialized #839ms
2019-09-02 10:57:13,294 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2019-09-02 10:57:13,302 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2019-09-02 10:57:13,306 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2019-09-02 10:57:13,307 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2019-09-02 10:57:13,307 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2019-09-02 10:57:13,307 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2019-09-02 10:57:13,320 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2019-09-02 10:57:13,320 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2019-09-02 10:57:13,333 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 9870
2019-09-02 10:57:13,333 INFO org.eclipse.jetty.server.Server: jetty-9.3.24.v20180605, build timestamp: 2018-06-05T10:11:56-07:00, git hash: 84205aa28f11a4f31f2a3b86d1bba2cc8ab69827
2019-09-02 10:57:13,350 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.s.ServletContextHandler#2f2bf0e2{/logs,file:///usr/local/Cellar/hadoop/3.1.2/libexec/logs/,AVAILABLE}
2019-09-02 10:57:13,351 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.s.ServletContextHandler#21ec5d87{/static,file:///usr/local/Cellar/hadoop/3.1.2/libexec/share/hadoop/hdfs/webapps/static/,AVAILABLE}
2019-09-02 10:57:13,404 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.w.WebAppContext#4fdf8f12{/,file:///usr/local/Cellar/hadoop/3.1.2/libexec/share/hadoop/hdfs/webapps/hdfs/,AVAILABLE}{/hdfs}
2019-09-02 10:57:13,409 INFO org.eclipse.jetty.server.AbstractConnector: Started ServerConnector#5710768a{HTTP/1.1,[http/1.1]}{0.0.0.0:9870}
2019-09-02 10:57:13,409 INFO org.eclipse.jetty.server.Server: Started #1013ms
2019-09-02 10:57:13,532 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2019-09-02 10:57:13,532 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2019-09-02 10:57:13,559 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Edit logging is async:true
2019-09-02 10:57:13,567 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: KeyProvider: null
2019-09-02 10:57:13,568 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair: true
2019-09-02 10:57:13,569 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2019-09-02 10:57:13,592 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = zxiao (auth:SIMPLE)
2019-09-02 10:57:13,592 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup = supergroup
2019-09-02 10:57:13,592 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2019-09-02 10:57:13,593 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2019-09-02 10:57:13,622 INFO org.apache.hadoop.hdfs.server.common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2019-09-02 10:57:13,630 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2019-09-02 10:57:13,630 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2019-09-02 10:57:13,634 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2019-09-02 10:57:13,634 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2019 Sep 02 10:57:13
2019-09-02 10:57:13,635 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2019-09-02 10:57:13,635 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2019-09-02 10:57:13,636 INFO org.apache.hadoop.util.GSet: 2.0% max memory 4 GB = 81.9 MB
2019-09-02 10:57:13,636 INFO org.apache.hadoop.util.GSet: capacity = 2^23 = 8388608 entries
2019-09-02 10:57:13,657 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable = false
2019-09-02 10:57:13,662 INFO org.apache.hadoop.conf.Configuration.deprecation: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS
2019-09-02 10:57:13,662 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2019-09-02 10:57:13,662 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
2019-09-02 10:57:13,662 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
2019-09-02 10:57:13,662 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 1
2019-09-02 10:57:13,662 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
2019-09-02 10:57:13,662 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
2019-09-02 10:57:13,662 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 2
2019-09-02 10:57:13,662 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: redundancyRecheckInterval = 3000ms
2019-09-02 10:57:13,662 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
2019-09-02 10:57:13,662 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2019-09-02 10:57:13,678 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: GLOBAL serial map: bits=24 maxEntries=16777215
2019-09-02 10:57:13,688 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2019-09-02 10:57:13,688 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2019-09-02 10:57:13,689 INFO org.apache.hadoop.util.GSet: 1.0% max memory 4 GB = 41.0 MB
2019-09-02 10:57:13,689 INFO org.apache.hadoop.util.GSet: capacity = 2^22 = 4194304 entries
2019-09-02 10:57:13,697 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: ACLs enabled? false
2019-09-02 10:57:13,697 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: POSIX ACL inheritance enabled? true
2019-09-02 10:57:13,697 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: XAttrs enabled? true
2019-09-02 10:57:13,697 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occurring more than 10 times
2019-09-02 10:57:13,702 INFO org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
2019-09-02 10:57:13,703 INFO org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager: SkipList is disabled
2019-09-02 10:57:13,706 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2019-09-02 10:57:13,706 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2019-09-02 10:57:13,706 INFO org.apache.hadoop.util.GSet: 0.25% max memory 4 GB = 10.2 MB
2019-09-02 10:57:13,706 INFO org.apache.hadoop.util.GSet: capacity = 2^20 = 1048576 entries
2019-09-02 10:57:13,712 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2019-09-02 10:57:13,712 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2019-09-02 10:57:13,712 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2019-09-02 10:57:13,714 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2019-09-02 10:57:13,714 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2019-09-02 10:57:13,715 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
2019-09-02 10:57:13,715 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2019-09-02 10:57:13,715 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 4 GB = 1.2 MB
2019-09-02 10:57:13,715 INFO org.apache.hadoop.util.GSet: capacity = 2^17 = 131072 entries
2019-09-02 10:57:13,727 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/local/Cellar/hadoop/hdfs/tmp/dfs/name/in_use.lock acquired by nodename 25057#Maggies-MacBook-Pro.local
2019-09-02 10:57:13,743 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Recovering unfinalized segments in /usr/local/Cellar/hadoop/hdfs/tmp/dfs/name/current
2019-09-02 10:57:13,748 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Planning to load image: FSImageFile(file=/usr/local/Cellar/hadoop/hdfs/tmp/dfs/name/current/fsimage_0000000000000000000, cpktTxId=0000000000000000000)
2019-09-02 10:57:13,792 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 1 INodes.
2019-09-02 10:57:13,809 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
2019-09-02 10:57:13,810 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid 0 from /usr/local/Cellar/hadoop/hdfs/tmp/dfs/name/current/fsimage_0000000000000000000
2019-09-02 10:57:13,812 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream#5c748168 expecting start txid #1
2019-09-02 10:57:13,813 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file /usr/local/Cellar/hadoop/hdfs/tmp/dfs/name/current/edits_0000000000000000001-0000000000000000002 maxTxnsToRead = 9223372036854775807
2019-09-02 10:57:13,815 INFO org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream: Fast-forwarding stream '/usr/local/Cellar/hadoop/hdfs/tmp/dfs/name/current/edits_0000000000000000001-0000000000000000002' to transaction ID 1
2019-09-02 10:57:13,826 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Edits file /usr/local/Cellar/hadoop/hdfs/tmp/dfs/name/current/edits_0000000000000000001-0000000000000000002 of size 42 edits # 2 loaded in 0 seconds
2019-09-02 10:57:13,826 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
2019-09-02 10:57:13,826 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 3
2019-09-02 10:57:13,910 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2019-09-02 10:57:13,911 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 193 msecs
2019-09-02 10:57:14,012 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: RPC server is binding to localhost:8020
2019-09-02 10:57:14,017 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
2019-09-02 10:57:14,023 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8020
2019-09-02 10:57:14,154 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemState, ReplicatedBlocksState and ECBlockGroupsState MBeans.
2019-09-02 10:57:14,170 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for active state
2019-09-02 10:57:14,170 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 3, 3
2019-09-02 10:57:14,175 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 2 Number of syncs: 3 SyncTimes(ms): 21
2019-09-02 10:57:14,177 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /usr/local/Cellar/hadoop/hdfs/tmp/dfs/name/current/edits_inprogress_0000000000000000003 -> /usr/local/Cellar/hadoop/hdfs/tmp/dfs/name/current/edits_0000000000000000003-0000000000000000004
2019-09-02 10:57:14,178 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: FSEditLogAsync was interrupted, exiting
2019-09-02 10:57:14,178 INFO org.apache.hadoop.ipc.Server: Stopping server on 8020
2019-09-02 10:57:14,198 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for active state
2019-09-02 10:57:14,198 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for standby state
2019-09-02 10:57:14,201 INFO org.eclipse.jetty.server.handler.ContextHandler: Stopped o.e.j.w.WebAppContext#4fdf8f12{/,null,UNAVAILABLE}{/hdfs}
2019-09-02 10:57:14,204 INFO org.eclipse.jetty.server.AbstractConnector: Stopped ServerConnector#5710768a{HTTP/1.1,[http/1.1]}{0.0.0.0:9870}
2019-09-02 10:57:14,204 INFO org.eclipse.jetty.server.handler.ContextHandler: Stopped o.e.j.s.ServletContextHandler#21ec5d87{/static,file:///usr/local/Cellar/hadoop/3.1.2/libexec/share/hadoop/hdfs/webapps/static/,UNAVAILABLE}
2019-09-02 10:57:14,204 INFO org.eclipse.jetty.server.handler.ContextHandler: Stopped o.e.j.s.ServletContextHandler#2f2bf0e2{/logs,file:///usr/local/Cellar/hadoop/3.1.2/libexec/logs/,UNAVAILABLE}
2019-09-02 10:57:14,205 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2019-09-02 10:57:14,205 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2019-09-02 10:57:14,205 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2019-09-02 10:57:14,209 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.io.IOException: Could not parse line: Su Mo Tu We Th Fr Sa
at org.apache.hadoop.fs.DF.parseOutput(DF.java:195)
at org.apache.hadoop.fs.DF.getFilesystem(DF.java:76)
at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker$CheckedVolume.<init>(NameNodeResourceChecker.java:69)
at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker.addDirToCheck(NameNodeResourceChecker.java:165)
at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker.<init>(NameNodeResourceChecker.java:134)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startCommonServices(FSNamesystem.java:1166)
at org.apache.hadoop.hdfs.server.namenode.NameNode.startCommonServices(NameNode.java:788)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:714)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
2019-09-02 10:57:14,210 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1: java.io.IOException: Could not parse line: Su Mo Tu We Th Fr Sa
2019-09-02 10:57:14,212 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at Maggies-MacBook-Pro.local/10.0.0.73
************************************************************/
Try formatting the namenode and then starting all the daemons again using start-all.sh.
That should solve the issue, I guess.
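In command form, that is roughly (a sketch, assuming you run from the Hadoop install directory as in the transcript above):
bin/hdfs namenode -format     # answer Y if it asks to re-format the storage directory
sbin/start-all.sh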
Execute ssh localhost; it may fail to connect.
If it fails to connect, add your ssh key to ~/.ssh/authorized_keys so that passwordless login to localhost works.
Then run ./start-dfs.sh again.
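A minimal sketch of setting up that passwordless login, assuming an RSA key and the default ~/.ssh locations:
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa         # generate a key with an empty passphrase
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys  # authorize it for logins to localhost
chmod 0600 ~/.ssh/authorized_keys
ssh localhost                                    # should now connect without a password prompt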
In my case I run a fortune program from my .bashrc, which prints some messages.
It seems that output interferes with the Hadoop scripts. My current version is 3.3.0.
The "Could not parse line: ***" error comes from Hadoop parsing the output of a command that now includes those messages, so it throws the error. Once I removed the fortune call, the problem was gone.
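If you would rather keep the fortune call than delete it, one option is to print only in interactive shells, since the Hadoop scripts run non-interactive ones; a sketch for ~/.bashrc:
# only print banners in interactive shells; stay silent for Hadoop's non-interactive ssh commands
case $- in
  *i*) fortune ;;
esac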

namenode is not formatted in hadoop

In my normal (non-root) account, I created these directories:
/usr/local/hadoop-2.7.3/data/dfs/namenode
/usr/local/hadoop-2.7.3/data/dfs/namesecondary
/usr/local/hadoop-2.7.3/data/dfs/datanode
/usr/local/hadoop-2.7.3/data/yarn/nm-local-dir
/usr/local/hadoop-2.7.3/data/yarn/system/rmstore
Then I typed these commands:
bin/hdfs namenode -format
sudo sbin/start-all.sh
jps
Then
In the normal account, I could see only jps.
In the root account, I could see Jps, DataNode, SecondaryNameNode, NodeManager and ResourceManager.
I have 2 questions.
Why can I see only Jps in the normal account?
Why is the namenode not started?
Thanks for reading.
I would appreciate any help.
namenode log file
2017-04-06 01:16:15,217 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2017-04-06 01:16:15,220 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2017-04-06 01:16:15,680 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2017-04-06 01:16:15,843 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2017-04-06 01:16:15,843 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2017-04-06 01:16:15,845 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://localhost:9010
2017-04-06 01:16:15,846 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use localhost:9010 to access this namenode/service.
2017-04-06 01:16:16,070 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://localhost:50070
2017-04-06 01:16:16,152 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2017-04-06 01:16:16,158 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2017-04-06 01:16:16,165 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2017-04-06 01:16:16,169 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2017-04-06 01:16:16,171 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2017-04-06 01:16:16,171 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2017-04-06 01:16:16,171 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2017-04-06 01:16:16,300 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2017-04-06 01:16:16,303 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2017-04-06 01:16:16,330 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
2017-04-06 01:16:16,330 INFO org.mortbay.log: jetty-6.1.26
2017-04-06 01:16:16,581 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup#localhost:50070
2017-04-06 01:16:16,612 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop-2.7.3/data/dfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2017-04-06 01:16:16,612 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop-2.7.3/data/dfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2017-04-06 01:16:16,613 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2017-04-06 01:16:16,613 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2017-04-06 01:16:16,617 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop-2.7.3/data/dfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2017-04-06 01:16:16,617 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop-2.7.3/data/dfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2017-04-06 01:16:16,639 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
2017-04-06 01:16:16,639 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
2017-04-06 01:16:16,668 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2017-04-06 01:16:16,668 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2017-04-06 01:16:16,669 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2017-04-06 01:16:16,669 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2017 Apr 06 01:16:16
2017-04-06 01:16:16,670 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2017-04-06 01:16:16,670 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2017-04-06 01:16:16,671 INFO org.apache.hadoop.util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
2017-04-06 01:16:16,671 INFO org.apache.hadoop.util.GSet: capacity = 2^21 = 2097152 entries
2017-04-06 01:16:16,690 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2017-04-06 01:16:16,691 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 1
2017-04-06 01:16:16,691 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
2017-04-06 01:16:16,691 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
2017-04-06 01:16:16,691 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 2
2017-04-06 01:16:16,691 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2017-04-06 01:16:16,691 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
2017-04-06 01:16:16,691 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2017-04-06 01:16:16,706 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = root (auth:SIMPLE)
2017-04-06 01:16:16,707 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup = supergroup
2017-04-06 01:16:16,707 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2017-04-06 01:16:16,707 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2017-04-06 01:16:16,708 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2017-04-06 01:16:16,963 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2017-04-06 01:16:16,963 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2017-04-06 01:16:16,970 INFO org.apache.hadoop.util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
2017-04-06 01:16:16,970 INFO org.apache.hadoop.util.GSet: capacity = 2^20 = 1048576 entries
2017-04-06 01:16:16,971 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: ACLs enabled? false
2017-04-06 01:16:16,971 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: XAttrs enabled? true
2017-04-06 01:16:16,971 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Maximum size of an xattr: 16384
2017-04-06 01:16:16,971 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2017-04-06 01:16:16,977 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2017-04-06 01:16:16,977 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2017-04-06 01:16:16,977 INFO org.apache.hadoop.util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
2017-04-06 01:16:16,977 INFO org.apache.hadoop.util.GSet: capacity = 2^18 = 262144 entries
2017-04-06 01:16:16,978 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2017-04-06 01:16:16,978 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2017-04-06 01:16:16,978 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
2017-04-06 01:16:16,980 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2017-04-06 01:16:16,980 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2017-04-06 01:16:16,980 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2017-04-06 01:16:16,983 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2017-04-06 01:16:16,983 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2017-04-06 01:16:16,984 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
2017-04-06 01:16:16,984 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2017-04-06 01:16:16,984 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
2017-04-06 01:16:16,984 INFO org.apache.hadoop.util.GSet: capacity = 2^15 = 32768 entries
2017-04-06 01:16:17,005 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/local/hadoop-2.7.3/data/dfs/namenode/in_use.lock acquired by nodename 5360#localhost
2017-04-06 01:16:17,007 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage
java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:225)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:975)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:585)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:645)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:796)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559)
2017-04-06 01:16:17,032 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup#localhost:50070
2017-04-06 01:16:17,035 WARN org.apache.hadoop.http.HttpServer2: HttpServer Acceptor: isRunning is false. Rechecking.
2017-04-06 01:16:17,035 WARN org.apache.hadoop.http.HttpServer2: HttpServer Acceptor: isRunning is false
2017-04-06 01:16:17,035 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2017-04-06 01:16:17,035 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2017-04-06 01:16:17,035 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2017-04-06 01:16:17,035 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:225)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:975)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:585)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:645)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:796)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559)
2017-04-06 01:16:17,036 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2017-04-06 01:16:17,040 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
Why can I see only Jps in the normal account?
As you started the daemons with sudo, the root user owns the processes. The jps command reports only the JVMs it has access permissions for; the normal account has no access to the processes owned by root.
Why is the namenode not started?
java.io.IOException: NameNode is not formatted.
The namenode is not yet formatted. It is possible that you missed entering Y when the format command prompted with (Y/N).
Check whether the properties are set correctly in core-site.xml and hdfs-site.xml.
Then run the following command:
$ hdfs namenode -format
Not sure, but check the ownership of the namenode folder.
It should be owned by the hadoop user (or whichever user is meant to run the daemons) so that user has the authority to access the folder.
I had the same issue and solved it by changing the ownership of the folder and also assigning full permissions to it.
Hope this helps.
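A minimal sketch of that ownership fix, assuming the daemons are meant to run as a user named hadoop and using the directories from the question:
sudo chown -R hadoop:hadoop /usr/local/hadoop-2.7.3/data   # hand the whole data tree to the hadoop user
sudo chmod -R u+rwX /usr/local/hadoop-2.7.3/data           # make sure the owner has full read/write/traverse access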

Namenode not starting - Exception in namenode join

My namenode is not starting up.
I tried formatting and deleting the tmp directory before attempting a restart, but it doesn't come up.
Currently I am attempting a two-node cluster. I cloned both nodes from a single-node machine and changed the properties so that one node runs the name node, job tracker and secondary name node, and the other runs the rest.
When trying to start the name node I get the exception below in the logs. I tried searching but didn't find anything specific to my problem. I have also set up passwordless ssh, in case any permissions were denied because of that.
2015-08-08 12:40:59,005 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = HNNAME/192.168.136.170
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.0.0-cdh4.7.0
STARTUP_MSG: classpath = /etc/hadoop/conf:/usr/lib/hadoop/lib/mockito-all-1.8.5.jar:/usr/lib/hadoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/jline-0.9.94.jar:/usr/lib/hadoop/lib/commons-el-1.0.jar:/usr/lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/activation-1.1.jar:/usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/jersey-core-1.8.jar:/usr/lib/hadoop/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop/lib/commons-math-2.1.jar:/usr/lib/hadoop/lib/xmlenc-0.52.jar:/usr/lib/hadoop/lib/zookeeper-3.4.5-cdh4.7.0.jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/lib/slf4j-log4j12-1.6.1.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop/lib/avro-1.7.4.jar:/usr/lib/hadoop/lib/jersey-server-1.8.jar:/usr/lib/hadoop/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/kfs-0.3.jar:/usr/lib/hadoop/lib/cloudera-jets3t-2.0.0-cdh4.7.0.jar:/usr/lib/hadoop/lib/log4j-1.2.17.jar:/usr/lib/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:/usr/lib/hadoop/lib/xz-1.0.jar:/usr/lib/hadoop/lib/commons-lang-2.5.jar:/usr/lib/hadoop/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop/lib/jetty-6.1.26.cloudera.2.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/lib/hadoop/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop/lib/commons-logging-1.1.1.jar:/usr/lib/hadoop/lib/commons-cli-1.2.jar:/usr/lib/hadoop/lib/commons-digester-1.8.jar:/usr/lib/hadoop/lib/jsr305-1.3.9.jar:/usr/lib/hadoop/lib/slf4j-api-1.6.1.jar:/usr/lib/hadoop/lib/junit-4.8.2.jar:/usr/lib/hadoop/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop/lib/jets3t-0.6.1.jar:/usr/lib/hadoop/lib/commons-io-2.1.jar:/usr/lib/hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/jasper-compiler-5.5.23.jar:/usr/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26.cloudera.2.jar:/usr/lib/hadoop/lib/commons-net-3.1.jar:/usr/lib/hadoop/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop/lib/stax-api-1.0.1.jar:/usr/lib/hadoop/lib/jsch-0.1.42.jar:/usr/lib/hadoop/lib/jersey-json-1.8.jar:/usr/lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/.//parquet-column-1.2.5-cdh4.7.0-sources.jar:/usr/lib/hadoop/.//parquet-hive-1.2.5-cdh4.7.0-sources.jar:/usr/lib/hadoop/.//parquet-column-1.2.5-cdh4.7.0.jar:/usr/lib/hadoop/.//parquet-scrooge-1.2.5-cdh4.7.0-sources.jar:/usr/lib/hadoop/.//hadoop-common-2.0.0-cdh4.7.0-tests.jar:/usr/lib/hadoop/.//parquet-hadoop-1.2.5-cdh4.7.0-sources.jar:/usr/lib/hadoop/.//parquet-generator-1.2.5-cdh4.7.0-javadoc.jar:/usr/lib/hadoop/.//hadoop-auth.jar:/usr/lib/hadoop/.//parquet-pig-1.2.5-cdh4.7.0-javadoc.jar:/usr/lib/hadoop/.//hadoop-annotations-2.0.0-cdh4.7.0.jar:/usr/lib/hadoop/.//hadoop-common-2.0.0-cdh4.7.0.jar:/usr/lib/hadoop/.//parquet-generator-1.2.5-cdh4.7.0-sources.jar:/usr/lib/hadoop/.//parquet-avro-1.2.5-cdh4.7.0.jar:/usr/lib/hadoop/.//parquet-cascading-1.2.5-cdh4.7.0-javadoc.jar:/usr/lib/hadoop/.//parquet-thrift-1.2.5-cdh4.7.0-javadoc.jar:/usr/lib/hadoop/.//parquet-thrift-1.2.5-cdh4.7.0-sources.jar:/usr/lib/hadoop/.//parquet-generator-1.2.5-cdh4.7.0.jar:/usr/lib/hadoop/.//parquet-cascading-1.2.5-cdh4.7.0.jar:/usr/lib/hadoop/.//parquet-common-1.2.5-cdh4.7.0-sources.jar:/usr/lib/hadoop/.//parquet-avro-1.2.5-cdh4.7.0-sources.jar:/usr/lib/hadoop/.//parquet-avro-1.2.5-cdh4.7.0-javadoc.jar:/usr/lib/hadoop/.//parquet-cascading-1.2.5-cdh4.7.0-sources.jar:/usr/lib/hadoop/.//p
arquet-encoding-1.2.5-cdh4.7.0.jar:/usr/lib/hadoop/.//parquet-pig-1.2.5-cdh4.7.0-sources.jar:/usr/lib/hadoop/.//parquet-thrift-1.2.5-cdh4.7.0.jar:/usr/lib/hadoop/.//parquet-common-1.2.5-cdh4.7.0-javadoc.jar:/usr/lib/hadoop/.//parquet-hive-1.2.5-cdh4.7.0.jar:/usr/lib/hadoop/.//hadoop-auth-2.0.0-cdh4.7.0.jar:/usr/lib/hadoop/.//parquet-pig-1.2.5-cdh4.7.0.jar:/usr/lib/hadoop/.//parquet-encoding-1.2.5-cdh4.7.0-javadoc.jar:/usr/lib/hadoop/.//parquet-hive-1.2.5-cdh4.7.0-javadoc.jar:/usr/lib/hadoop/.//parquet-format-1.0.0-cdh4.7.0.jar:/usr/lib/hadoop/.//parquet-common-1.2.5-cdh4.7.0.jar:/usr/lib/hadoop/.//parquet-format-1.0.0-cdh4.7.0-sources.jar:/usr/lib/hadoop/.//hadoop-annotations.jar:/usr/lib/hadoop/.//parquet-test-hadoop2-1.2.5-cdh4.7.0.jar:/usr/lib/hadoop/.//parquet-hadoop-1.2.5-cdh4.7.0-javadoc.jar:/usr/lib/hadoop/.//parquet-encoding-1.2.5-cdh4.7.0-sources.jar:/usr/lib/hadoop/.//hadoop-common.jar:/usr/lib/hadoop/.//parquet-hadoop-1.2.5-cdh4.7.0.jar:/usr/lib/hadoop/.//parquet-scrooge-1.2.5-cdh4.7.0.jar:/usr/lib/hadoop/.//parquet-column-1.2.5-cdh4.7.0-javadoc.jar:/usr/lib/hadoop/.//parquet-scrooge-1.2.5-cdh4.7.0-javadoc.jar:/usr/lib/hadoop/.//parquet-pig-bundle-1.2.5-cdh4.7.0-sources.jar:/usr/lib/hadoop/.//parquet-pig-bundle-1.2.5-cdh4.7.0.jar:/usr/lib/hadoop/.//parquet-format-1.0.0-cdh4.7.0-javadoc.jar:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/jline-0.9.94.jar:/usr/lib/hadoop-hdfs/lib/commons-el-1.0.jar:/usr/lib/hadoop-hdfs/lib/asm-3.2.jar:/usr/lib/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/lib/hadoop-hdfs/lib/jersey-core-1.8.jar:/usr/lib/hadoop-hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/xmlenc-0.52.jar:/usr/lib/hadoop-hdfs/lib/zookeeper-3.4.5-cdh4.7.0.jar:/usr/lib/hadoop-hdfs/lib/jsp-api-2.1.jar:/usr/lib/hadoop-hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/jersey-server-1.8.jar:/usr/lib/hadoop-hdfs/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/lib/hadoop-hdfs/lib/commons-lang-2.5.jar:/usr/lib/hadoop-hdfs/lib/jetty-6.1.26.cloudera.2.jar:/usr/lib/hadoop-hdfs/lib/servlet-api-2.5.jar:/usr/lib/hadoop-hdfs/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop-hdfs/lib/commons-daemon-1.0.3.jar:/usr/lib/hadoop-hdfs/lib/commons-logging-1.1.1.jar:/usr/lib/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/lib/hadoop-hdfs/lib/jsr305-1.3.9.jar:/usr/lib/hadoop-hdfs/lib/commons-io-2.1.jar:/usr/lib/hadoop-hdfs/lib/jetty-util-6.1.26.cloudera.2.jar:/usr/lib/hadoop-hdfs/lib/commons-codec-1.4.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.0.0-cdh4.7.0-tests.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.0.0-cdh4.7.0.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs.jar:/usr/lib/hadoop-yarn/lib/netty-3.2.4.Final.jar:/usr/lib/hadoop-yarn/lib/paranamer-2.3.jar:/usr/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/usr/lib/hadoop-yarn/lib/javax.inject-1.jar:/usr/lib/hadoop-yarn/lib/guice-3.0.jar:/usr/lib/hadoop-yarn/lib/aopalliance-1.0.jar:/usr/lib/hadoop-yarn/lib/asm-3.2.jar:/usr/lib/hadoop-yarn/lib/jersey-core-1.8.jar:/usr/lib/hadoop-yarn/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-yarn/lib/jersey-guice-1.8.jar:/usr/lib/hadoop-yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-yarn/lib/avro-1.7.4.jar:/usr/lib/hadoop-yarn/lib/jersey-server-1.8.jar:/usr/lib/hadoop-yarn/lib/log4j-1.2.17.jar:/usr/lib/hadoop-yarn/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop-yarn/lib/xz-1.0.jar:/usr/lib/hadoop-yarn/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop-yarn/lib/commons-io-2.1.jar:/usr/lib/hadoop-yarn/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher-
2.0.0-cdh4.7.0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-resourcemanager-2.0.0-cdh4.7.0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-common.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-api.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-resourcemanager.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-tests-2.0.0-cdh4.7.0-tests.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-common.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-nodemanager-2.0.0-cdh4.7.0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-client-2.0.0-cdh4.7.0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-nodemanager.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-tests-2.0.0-cdh4.7.0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-site.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-web-proxy.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-web-proxy-2.0.0-cdh4.7.0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-api-2.0.0-cdh4.7.0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-client.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-common-2.0.0-cdh4.7.0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-common-2.0.0-cdh4.7.0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-distributedshell-2.0.0-cdh4.7.0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-tests.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-site-2.0.0-cdh4.7.0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-distributedshell.jar:/usr/lib/hadoop-mapreduce/lib/netty-3.2.4.Final.jar:/usr/lib/hadoop-mapreduce/lib/paranamer-2.3.jar:/usr/lib/hadoop-mapreduce/lib/guice-servlet-3.0.jar:/usr/lib/hadoop-mapreduce/lib/javax.inject-1.jar:/usr/lib/hadoop-mapreduce/lib/guice-3.0.jar:/usr/lib/hadoop-mapreduce/lib/aopalliance-1.0.jar:/usr/lib/hadoop-mapreduce/lib/asm-3.2.jar:/usr/lib/hadoop-mapreduce/lib/jersey-core-1.8.jar:/usr/lib/hadoop-mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/lib/jersey-guice-1.8.jar:/usr/lib/hadoop-mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/lib/avro-1.7.4.jar:/usr/lib/hadoop-mapreduce/lib/jersey-server-1.8.jar:/usr/lib/hadoop-mapreduce/lib/log4j-1.2.17.jar:/usr/lib/hadoop-mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/lib/xz-1.0.jar:/usr/lib/hadoop-mapreduce/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop-mapreduce/lib/commons-io-2.1.jar:/usr/lib/hadoop-mapreduce/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-examples.jar:/usr/lib/hadoop-mapreduce/.//hadoop-archives-2.0.0-cdh4.7.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-app-2.0.0-cdh4.7.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-datajoin.jar:/usr/lib/hadoop-mapreduce/.//hadoop-distcp-2.0.0-cdh4.7.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient.jar:/usr/lib/hadoop-mapreduce/.//hadoop-streaming-2.0.0-cdh4.7.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-core.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle-2.0.0-cdh4.7.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-rumen-2.0.0-cdh4.7.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins.jar:/usr/lib/hadoop-mapreduce/.//hadoop-extras-2.0.0-cdh4.7.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-datajoin-2.0.0-cdh4.7.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-rumen.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-app.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-common.jar:/usr/lib/hadoop-mapreduce/.//hadoop-streaming.jar:/usr/lib/hadoop-mapr
educe/.//hadoop-mapreduce-client-hs-2.0.0-cdh4.7.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-gridmix-2.0.0-cdh4.7.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.0.0-cdh4.7.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-extras.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.0.0-cdh4.7.0-tests.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-common-2.0.0-cdh4.7.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-archives.jar:/usr/lib/hadoop-mapreduce/.//hadoop-distcp.jar:/usr/lib/hadoop-mapreduce/.//hadoop-gridmix.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-examples-2.0.0-cdh4.7.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins-2.0.0-cdh4.7.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-core-2.0.0-cdh4.7.0.jar
STARTUP_MSG: build = git://centos32-6-slave.sf.cloudera.com/data/1/jenkins/workspace/generic-package-centos32-6/topdir/BUILD/hadoop-2.0.0-cdh4.7.0/src/hadoop-common-project/hadoop-common -r 8e266e052e423af592871e2dfe09d54c03f6a0e8; compiled by 'jenkins' on Wed May 28 10:12:25 PDT 2014
STARTUP_MSG: java = 1.6.0_45
************************************************************/
2015-08-08 12:40:59,010 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2015-08-08 12:40:59,576 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2015-08-08 12:40:59,718 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-08-08 12:40:59,718 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2015-08-08 12:41:00,059 WARN org.apache.hadoop.hdfs.server.common.Util: Path /storage/name should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-08 12:41:00,060 WARN org.apache.hadoop.hdfs.server.common.Util: Path /storage/name should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-08 12:41:00,061 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of dataloss due to lack of redundant storage directories!
2015-08-08 12:41:00,061 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of dataloss due to lack of redundant storage directories!
2015-08-08 12:41:00,069 WARN org.apache.hadoop.hdfs.server.common.Util: Path /storage/name should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-08 12:41:00,069 WARN org.apache.hadoop.hdfs.server.common.Util: Path /storage/name should be specified as a URI in configuration files. Please update hdfs configuration.
2015-08-08 12:41:00,101 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
2015-08-08 12:41:00,165 INFO org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager: Setting heartbeat recheck interval to 30000 since dfs.namenode.stale.datanode.interval is less than dfs.namenode.heartbeat.recheck-interval
2015-08-08 12:41:00,180 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2015-08-08 12:41:00,187 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2015-08-08 12:41:00,187 INFO org.apache.hadoop.util.GSet: VM type = 32-bit
2015-08-08 12:41:00,196 INFO org.apache.hadoop.util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
2015-08-08 12:41:00,196 INFO org.apache.hadoop.util.GSet: capacity = 2^22 = 4194304 entries
2015-08-08 12:41:01,099 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2015-08-08 12:41:01,100 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 1
2015-08-08 12:41:01,100 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
2015-08-08 12:41:01,100 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
2015-08-08 12:41:01,100 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 2
2015-08-08 12:41:01,100 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
2015-08-08 12:41:01,100 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2015-08-08 12:41:01,100 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
2015-08-08 12:41:01,100 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2015-08-08 12:41:01,110 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = hdfs (auth:SIMPLE)
2015-08-08 12:41:01,110 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup = supergroup
2015-08-08 12:41:01,111 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = false
2015-08-08 12:41:01,111 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2015-08-08 12:41:01,115 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2015-08-08 12:41:01,547 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2015-08-08 12:41:01,549 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2015-08-08 12:41:01,549 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2015-08-08 12:41:01,549 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
2015-08-08 12:41:01,562 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /storage/name/in_use.lock acquired by nodename 7800@HNNAME
2015-08-08 12:41:01,640 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Recovering unfinalized segments in /storage/name/current
2015-08-08 12:41:01,772 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loading image file /storage/name/current/fsimage_0000000000000038306 using no compression
2015-08-08 12:41:01,772 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Number of files = 4012
2015-08-08 12:41:01,932 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Number of files under construction = 1
2015-08-08 12:41:01,941 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Image file of size 343797 loaded in 0 seconds.
2015-08-08 12:41:01,941 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid 38306 from /storage/name/current/fsimage_0000000000000038306
2015-08-08 12:41:01,944 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@c623af expecting start txid #38307
2015-08-08 12:41:01,965 INFO org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding stream '/storage/name/current/edits_0000000000000038307-0000000000000038308' to transaction ID 38307
2015-08-08 12:41:01,985 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Edits file /storage/name/current/edits_0000000000000038307-0000000000000038308 of size 30 edits # 2 loaded in 0 seconds
2015-08-08 12:41:02,045 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 38309
2015-08-08 12:41:02,154 WARN org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Unable to start log segment 38309 at /storage/name/current/edits_inprogress_0000000000000038309: /storage/name/current/edits_inprogress_0000000000000038309 (Permission denied)
2015-08-08 12:41:02,154 ERROR org.apache.hadoop.hdfs.server.namenode.NNStorage: Error reported on storage directory Storage Directory /storage/name
2015-08-08 12:41:02,154 WARN org.apache.hadoop.hdfs.server.namenode.NNStorage: About to remove corresponding storage: /storage/name
2015-08-08 12:41:02,155 ERROR org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: starting log segment 38309 failed for (journal JournalAndStream(mgr=FileJournalManager(root=/storage/name), stream=null))
java.io.FileNotFoundException: /storage/name/current/edits_inprogress_0000000000000038309 (Permission denied)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
at org.apache.hadoop.hdfs.server.namenode.EditLogFileOutputStream.<init>(EditLogFileOutputStream.java:74)
at org.apache.hadoop.hdfs.server.namenode.FileJournalManager.startLogSegment(FileJournalManager.java:105)
at org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalAndStream.startLogSegment(JournalSet.java:89)
at org.apache.hadoop.hdfs.server.namenode.JournalSet$2.apply(JournalSet.java:197)
at org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:347)
at org.apache.hadoop.hdfs.server.namenode.JournalSet.startLogSegment(JournalSet.java:194)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:923)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.openForWrite(FSEditLog.java:264)
at org.apache.hadoop.hdfs.server.namenode.FSImage.openEditLogForWrite(FSImage.java:574)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:747)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:531)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:403)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:445)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:621)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:606)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1177)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1241)
2015-08-08 12:41:02,156 ERROR org.apache.hadoop.hdfs.server.namenode.FSEditLog: Disabling journal JournalAndStream(mgr=FileJournalManager(root=/storage/name), stream=null)
2015-08-08 12:41:02,156 ERROR org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: starting log segment 38309 failed for too many journals
2015-08-08 12:41:02,157 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2015-08-08 12:41:02,157 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2015-08-08 12:41:02,158 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2015-08-08 12:41:02,158 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.io.IOException: Unable to start log segment 38309: too few journals successfully started.
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:925)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.openForWrite(FSEditLog.java:264)
at org.apache.hadoop.hdfs.server.namenode.FSImage.openEditLogForWrite(FSImage.java:574)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:747)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:531)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:403)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:445)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:621)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:606)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1177)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1241)
Caused by: java.io.IOException: starting log segment 38309 failed for too many journals
at org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:374)
at org.apache.hadoop.hdfs.server.namenode.JournalSet.startLogSegment(JournalSet.java:194)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:923)
... 10 more
2015-08-08 12:41:02,159 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2015-08-08 12:41:02,160 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at HNNAME/192.168.136.170
************************************************************/
I think the permissions on the NameNode metadata location are not set correctly. To make sure it works, verify your procedure against the steps below.
Assuming the NameNode metadata location is /storage/name:
mkdir -p /storage/name
chown -R hdfs:hadoop /storage/name
sudo -u hdfs hadoop namenode -format
service hadoop-hdfs-namenode start (assuming a CDH RPM installation; the service name varies with the installation method you used)
The Hadoop daemon starts as the hdfs user, and if the metadata location is not owned by that user and the Hadoop group, you get the error mentioned above.
If you look at the log above, the filesystem owner (fsOwner) is hdfs and the supergroup is supergroup. The exception is a FileNotFoundException ("Permission denied") because the service responsible for starting the NameNode cannot write to that directory: it does not have the required permissions.
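As a quick sanity check (a sketch only, assuming the /storage/name path from the log above and the hdfs user and hadoop group used in this answer), you can reproduce the symptom and fix ownership like this:
ls -ld /storage/name /storage/name/current          # owner should be hdfs, group hadoop
sudo -u hdfs touch /storage/name/current/perm_test  # fails with "Permission denied" if ownership is wrong
sudo chown -R hdfs:hadoop /storage/name             # fix ownership recursively
sudo -u hdfs rm -f /storage/name/current/perm_test  # clean up the test file, then restart the namenode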
I had the same problem on Hortonworks after adding another disk to HDFS. I simply ran chown -R hdfs:hadoop /hadoop/hdfs and it started working.
For my Hadoop install on Ubuntu (WSL), the following worked:
cd /usr/local/hadoop # your directory path may be different
./sbin/stop-dfs.sh # stop
rm -r ./tmp # delete tmp folder
./bin/hdfs namenode -format # reformat NameNode
./sbin/start-dfs.sh # restart
Hope this can help you :) (Note that deleting the tmp folder and reformatting the NameNode erases any data already stored in HDFS.)

Hadoop hdfs namenode start command fails. Not formatted either?

Much like the title states, when I run the command sudo service hadoop-hdfs-namenode start it fails with the message below.
2015-02-01 16:51:22,032 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2015-02-01 16:51:22,379 WARN org.apache.hadoop.metrics2.impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2015-02-01 16:51:22,512 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-02-01 16:51:22,512 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2015-02-01 16:51:23,043 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of dataloss due to lack of redundant storage directories!
2015-02-01 16:51:23,043 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of dataloss due to lack of redundant storage directories!
2015-02-01 16:51:23,096 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
2015-02-01 16:51:23,214 INFO org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager: Setting heartbeat recheck interval to 30000 since dfs.namenode.stale.datanode.interval is less than dfs.namenode.heartbeat.recheck-interval
2015-02-01 16:51:23,223 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2015-02-01 16:51:23,227 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2015-02-01 16:51:23,227 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2015-02-01 16:51:23,232 INFO org.apache.hadoop.util.GSet: 2.0% max memory 889 MB = 17.8 MB
2015-02-01 16:51:23,233 INFO org.apache.hadoop.util.GSet: capacity = 2^21 = 2097152 entries
2015-02-01 16:51:23,242 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2015-02-01 16:51:23,242 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 1
2015-02-01 16:51:23,242 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
2015-02-01 16:51:23,242 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
2015-02-01 16:51:23,242 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 2
2015-02-01 16:51:23,242 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
2015-02-01 16:51:23,243 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2015-02-01 16:51:23,243 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
2015-02-01 16:51:23,243 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2015-02-01 16:51:23,253 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = hdfs (auth:SIMPLE)
2015-02-01 16:51:23,254 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup = supergroup
2015-02-01 16:51:23,254 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = false
2015-02-01 16:51:23,254 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2015-02-01 16:51:23,259 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2015-02-01 16:51:23,555 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2015-02-01 16:51:23,558 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2015-02-01 16:51:23,558 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2015-02-01 16:51:23,558 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 0
2015-02-01 16:51:23,563 WARN org.apache.hadoop.hdfs.server.common.Storage: Storage directory /var/lib/hadoop-hdfs/cache/hdfs/dfs/name does not exist
2015-02-01 16:51:23,565 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2015-02-01 16:51:23,565 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2015-02-01 16:51:23,565 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2015-02-01 16:51:23,566 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /var/lib/hadoop-hdfs/cache/hdfs/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:302)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:207)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:741)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:531)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:403)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:445)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:621)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:606)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1177)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1241)
2015-02-01 16:51:23,571 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2015-02-01 16:51:23,573 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop/127.0.0.1
************************************************************/
The error itself is pretty self-explanatory: the directory /var/lib/hadoop-hdfs/cache/hdfs/dfs/name is missing, which is correct. The cache directory was empty, so I created the cache/hdfs/dfs/name path under it and changed its owner and group to match the directory above it (hdfs:hadoop).
I then ran the format command sudo -u hdfs hdfs namenode –format again, and it ends the same way it did before I created this directory.
STARTUP_MSG: build = file:///data/jenkins/workspace/generic-package-rhel64-6-0/topdir/BUILD/hadoop-2.0.0-cdh4.7.1/src/hadoop-common-project/hadoop-common -r Unknown; compiled by 'jenkins' on Tue Nov 18 08:10:25 PST 2014
STARTUP_MSG: java = 1.7.0_75
************************************************************/
15/02/01 17:09:04 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
Usage: java NameNode [-backup] | [-checkpoint] | [-format [-clusterid cid ] [-force] [-nonInteractive] ] | [-upgrade] | [-rollback] | [-finalize] | [-importCheckpoint] | [-initializeSharedEdits] | [-bootstrapStandby] | [-recover [ -force ] ]
15/02/01 17:09:04 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop/127.0.0.1
Now I run the namenode start command again and receive the following error:
STARTUP_MSG: build = file:///data/jenkins/workspace/generic-package-rhel64-6-0/topdir/BUILD/hadoop-2.0.0-cdh4.7.1/src/hadoop-common-project/hadoop-common -r Unknown; compiled by 'jenkins' on Tue Nov 18 08:10:25 PST 2014
STARTUP_MSG: java = 1.7.0_75
************************************************************/
2015-02-01 17:09:26,774 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2015-02-01 17:09:27,097 WARN org.apache.hadoop.metrics2.impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2015-02-01 17:09:27,215 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-02-01 17:09:27,216 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2015-02-01 17:09:27,721 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of dataloss due to lack of redundant storage directories!
2015-02-01 17:09:27,721 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of dataloss due to lack of redundant storage directories!
2015-02-01 17:09:27,779 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
2015-02-01 17:09:27,883 INFO org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager: Setting heartbeat recheck interval to 30000 since dfs.namenode.stale.datanode.interval is less than dfs.namenode.heartbeat.recheck-interval
2015-02-01 17:09:27,890 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2015-02-01 17:09:27,895 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2015-02-01 17:09:27,895 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2015-02-01 17:09:27,899 INFO org.apache.hadoop.util.GSet: 2.0% max memory 889 MB = 17.8 MB
2015-02-01 17:09:27,899 INFO org.apache.hadoop.util.GSet: capacity = 2^21 = 2097152 entries
2015-02-01 17:09:27,909 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2015-02-01 17:09:27,909 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 1
2015-02-01 17:09:27,909 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
2015-02-01 17:09:27,909 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
2015-02-01 17:09:27,909 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 2
2015-02-01 17:09:27,909 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
2015-02-01 17:09:27,909 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2015-02-01 17:09:27,909 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
2015-02-01 17:09:27,910 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2015-02-01 17:09:27,918 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = hdfs (auth:SIMPLE)
2015-02-01 17:09:27,918 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup = supergroup
2015-02-01 17:09:27,918 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = false
2015-02-01 17:09:27,918 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2015-02-01 17:09:27,924 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2015-02-01 17:09:28,178 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2015-02-01 17:09:28,180 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2015-02-01 17:09:28,180 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2015-02-01 17:09:28,180 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 0
2015-02-01 17:09:28,193 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /var/lib/hadoop-hdfs/cache/hdfs/dfs/name/in_use.lock acquired by nodename 28482@hadoop
2015-02-01 17:09:28,196 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2015-02-01 17:09:28,196 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2015-02-01 17:09:28,196 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2015-02-01 17:09:28,197 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:217)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:741)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:531)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:403)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:445)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:621)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:606)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1177)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1241)
2015-02-01 17:09:28,202 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2015-02-01 17:09:28,205 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop/127.0.0.1
************************************************************/
My system is running in VirtualBox with a CentOS 6.6 guest, Oracle JDK 1.7, and attempting to run Cloudera CDH4. Any input on what to do next to resolve this issue would be appreciated.
If you copy and paste the format command from slides or something, can you actually type it and see if it works?
I don't know if you can see the difference between
–format and -format.
The dash looks different to me.
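One illustrative way to see it (not from the original answer) is to dump the bytes of each command line: the character pasted from slides is usually an en dash, whose UTF-8 bytes differ from the plain ASCII hyphen that Hadoop expects.
echo 'hdfs namenode –format' | od -c   # pasted version: the dash prints as the bytes 342 200 223 (an en dash)
echo 'hdfs namenode -format' | od -c   # typed version: a single plain - (ASCII hyphen)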
Had the same issue formatting the namenode. Retyped the command (not copy-paste):
hdfs namenode -format
It worked. Thanks https://stackoverflow.com/users/4533812/james

Hadoop-2.2.0 NameNode startup issue

I am new to Hadoop and am facing the following issue while starting the NameNode with the ./hadoop-daemon.sh start namenode command.
Steps I followed:
1. Downloaded an Ubuntu 13 VM and installed Java 1.6 and hadoop-2.2.0
2. Updated the configuration files
3. Ran hadoop namenode –format
4. Ran ./hadoop-daemon.sh start namenode from the sbin directory
Error is:
2014-01-04 06:55:48,561 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2014-01-04 06:55:48,565 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2014-01-04 06:55:48,565 INFO org.apache.hadoop.util.GSet: VM type = 32-bit
2014-01-04 06:55:48,571 INFO org.apache.hadoop.util.GSet: 2.0% max memory = 888.9 MB
2014-01-04 06:55:48,571 INFO org.apache.hadoop.util.GSet: capacity = 2^22 = 4194304 entries
2014-01-04 06:55:48,603 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2014-01-04 06:55:48,604 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 1
2014-01-04 06:55:48,604 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
2014-01-04 06:55:48,604 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
2014-01-04 06:55:48,604 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 2
2014-01-04 06:55:48,604 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
2014-01-04 06:55:48,605 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2014-01-04 06:55:48,605 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
2014-01-04 06:55:48,616 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = user (auth:SIMPLE)
2014-01-04 06:55:48,617 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup = supergroup
2014-01-04 06:55:48,617 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2014-01-04 06:55:48,617 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2014-01-04 06:55:48,621 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2014-01-04 06:55:48,717 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2014-01-04 06:55:48,717 INFO org.apache.hadoop.util.GSet: VM type = 32-bit
2014-01-04 06:55:48,717 INFO org.apache.hadoop.util.GSet: 1.0% max memory = 888.9 MB
2014-01-04 06:55:48,717 INFO org.apache.hadoop.util.GSet: capacity = 2^21 = 2097152 entries
2014-01-04 06:55:48,732 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2014-01-04 06:55:48,738 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2014-01-04 06:55:48,738 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2014-01-04 06:55:48,738 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
2014-01-04 06:55:48,740 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2014-01-04 06:55:48,740 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2014-01-04 06:55:48,744 INFO org.apache.hadoop.util.GSet: Computing capacity for map Namenode Retry Cache
2014-01-04 06:55:48,744 INFO org.apache.hadoop.util.GSet: VM type = 32-bit
2014-01-04 06:55:48,744 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory = 888.9 MB
2014-01-04 06:55:48,744 INFO org.apache.hadoop.util.GSet: capacity = 2^16 = 65536 entries
2014-01-04 06:55:48,768 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/user/hadoop2_data/hdfs/namenode/in_use.lock acquired by nodename 12574@ubuntuvm
2014-01-04 06:55:48,785 INFO org.mortbay.log: Stopped SelectChannelConnector@0.0.0.0:50070
2014-01-04 06:55:48,789 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2014-01-04 06:55:48,791 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2014-01-04 06:55:48,791 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2014-01-04 06:55:48,793 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:210)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:787)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:568)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:443)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:491)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:684)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:669)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1254)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1320)
2014-01-04 06:55:48,798 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2014-01-04 06:55:48,803 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntuvm/127.0.1.1
************************************************************/
Can someone help me resolve this issue? I tried to Google it but still didn't find a solution.
Looks like your "hadoop namenode -format" didn't take (I assume you've tried hitting that command again and it still doesn't work). When you invoke hadoop namenode -format the user you are running as must have write access to the directories in dfs.data.dir and dfs.name.dir.
By default they are set to
${hadoop.tmp.dir}/dfs/data
and
${hadoop.tmp.dir}/dfs/name
where hadoop.tmp.dir is another config property that defaults to /tmp/hadoop-${username}.
So by default the hadoop data files are kept under your /tmp directory, which is not that great, especially if you have scripts that can clean out those directories.
Ensure that dfs.data.dir and dfs.name.dir (these live in hdfs-site.xml, while hadoop.tmp.dir belongs in core-site.xml) point to directories that the user who runs the hadoop admin commands and starts the hadoop daemons can write to. Then reformat HDFS and try again.
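A rough check-and-fix sequence along those lines, sketched under the same assumptions (the /data/hadoop paths below are placeholders, not from the original question):
hdfs getconf -confKey hadoop.tmp.dir          # see where the defaults currently resolve
hdfs getconf -confKey dfs.namenode.name.dir   # the Hadoop 2.x name for dfs.name.dir
sudo mkdir -p /data/hadoop/dfs/name /data/hadoop/dfs/data
sudo chown -R $USER /data/hadoop              # the user that starts the daemons must be able to write here
hadoop namenode -format                       # reformat, then start the daemon from sbin again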

Resources