Datanode starts but not namenode - hadoop

After a bit of struggling I eventually managed to run Hadoop in pseudo-distributed mode, with a namenode and a jobtracker working perfectly (at http://localhost:50070 and http://localhost:50030).
Yesterday I tried to restart my namenode, datanode, etc with:
$ hadoop namenode -format
$ start-all.sh
And jps gives me the following output:
17148 DataNode
17295 SecondaryNameNode
17419 JobTracker
17669 Jps
The namenode doesn't seem to be willing to start anymore, and the jobtracker dies a few seconds later.
Note that I hadn't restarted my computer, and that I've already tried the solution given in the following thread, Namenode not getting started, but it didn't help.
Here is the namenode log, with a bunch of errors. I don't know how to solve this issue at all:
2013-08-16 09:02:21,647 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = localhost.lan/192.168.1.94
STARTUP_MSG: args = []
STARTUP_MSG: version = 1.2.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG: java = 1.7.0_25
************************************************************/
2013-08-16 09:02:21,839 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-08-16 09:02:21,868 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-08-16 09:02:21,871 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-08-16 09:02:21,871 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2013-08-16 09:02:22,098 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-08-16 09:02:22,103 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-08-16 09:02:22,110 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-08-16 09:02:22,111 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2013-08-16 09:02:22,140 INFO org.apache.hadoop.hdfs.util.GSet: Computing capacity for map BlocksMap
2013-08-16 09:02:22,140 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 64-bit
2013-08-16 09:02:22,140 INFO org.apache.hadoop.hdfs.util.GSet: 2.0% max memory = 932118528
2013-08-16 09:02:22,140 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^21 = 2097152 entries
2013-08-16 09:02:22,140 INFO org.apache.hadoop.hdfs.util.GSet: recommended=2097152, actual=2097152
2013-08-16 09:02:22,174 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=rlk
2013-08-16 09:02:22,174 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2013-08-16 09:02:22,174 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2013-08-16 09:02:22,189 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2013-08-16 09:02:22,189 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2013-08-16 09:02:22,271 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2013-08-16 09:02:22,320 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
2013-08-16 09:02:22,321 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2013-08-16 09:02:22,363 INFO org.apache.hadoop.hdfs.server.common.Storage: Start loading image file /home/rlk/hduser/dfs/name/current/fsimage
2013-08-16 09:02:22,364 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1
2013-08-16 09:02:22,372 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
2013-08-16 09:02:22,375 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file /home/rlk/hduser/dfs/name/current/fsimage of size 109 bytes loaded in 0 seconds.
2013-08-16 09:02:22,376 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Start loading edits file /home/rlk/hduser/dfs/name/current/edits
2013-08-16 09:02:22,376 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: EOF of /home/rlk/hduser/dfs/name/current/edits, reached end of edit log Number of transactions found: 0. Bytes read: 4
2013-08-16 09:02:22,376 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Start checking end of edit log (/home/rlk/hduser/dfs/name/current/edits) ...
2013-08-16 09:02:22,376 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Checked the bytes after the end of edit log (/home/rlk/hduser/dfs/name/current/edits):
2013-08-16 09:02:22,376 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Padding position = -1 (-1 means padding not found)
2013-08-16 09:02:22,376 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Edit log length = 4
2013-08-16 09:02:22,376 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Read length = 4
2013-08-16 09:02:22,376 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Corruption length = 0
2013-08-16 09:02:22,376 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Toleration length = 0 (= dfs.namenode.edits.toleration.length)
2013-08-16 09:02:22,382 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Summary: |---------- Read=4 ----------|-- Corrupt=0 --|-- Pad=0 --|
2013-08-16 09:02:22,382 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Edits file /home/rlk/hduser/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
2013-08-16 09:02:22,387 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file /home/rlk/hduser/dfs/name/current/fsimage of size 109 bytes saved in 0 seconds.
2013-08-16 09:02:22,553 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=4, editlog=/home/rlk/hduser/dfs/name/current/edits
2013-08-16 09:02:22,553 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 4, editlog=/home/rlk/hduser/dfs/name/current/edits
2013-08-16 09:02:22,933 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2013-08-16 09:02:22,933 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 776 msecs
2013-08-16 09:02:22,935 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.safemode.threshold.pct = 0.9990000128746033
2013-08-16 09:02:22,935 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2013-08-16 09:02:22,935 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.safemode.extension = 30000
2013-08-16 09:02:22,935 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of blocks excluded by safe block count: 0 total blocks: 0 and thus the safe blocks: 0
2013-08-16 09:02:22,956 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks = 0
2013-08-16 09:02:22,956 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid blocks = 0
2013-08-16 09:02:22,956 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of under-replicated blocks = 0
2013-08-16 09:02:22,956 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of over-replicated blocks = 0
2013-08-16 09:02:22,956 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode termination scan for invalid, over- and under-replicated blocks completed in 21 msec
2013-08-16 09:02:22,956 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 0 secs
2013-08-16 09:02:22,956 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2013-08-16 09:02:22,962 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2013-08-16 09:02:22,972 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2013-08-16 09:02:22,974 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 1 msec
2013-08-16 09:02:22,974 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 1 msec processing time, 1 msec clock time, 1 cycles
2013-08-16 09:02:22,974 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
2013-08-16 09:02:22,974 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
2013-08-16 09:02:22,983 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source FSNamesystemMetrics registered.
2013-08-16 09:02:23,026 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
2013-08-16 09:02:23,029 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort8020 registered.
2013-08-16 09:02:23,030 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort8020 registered.
2013-08-16 09:02:23,037 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: localhost.localdomain/127.0.0.1:8020
2013-08-16 09:02:23,195 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2013-08-16 09:02:23,306 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2013-08-16 09:02:23,318 INFO org.apache.hadoop.http.HttpServer: dfs.webhdfs.enabled = false
2013-08-16 09:02:23,329 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
2013-08-16 09:02:23,331 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070
2013-08-16 09:02:23,331 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070
2013-08-16 09:02:23,331 INFO org.mortbay.log: jetty-6.1.26
2013-08-16 09:02:23,386 INFO org.mortbay.log: Extract jar:file:/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.25-2.3.12.3.fc19.x86_64/jre/lib/ext/hadoop-core-1.2.1.jar!/webapps/hdfs to /tmp/Jetty_0_0_0_0_50070_hdfs____w2cu08/webapp
2013-08-16 09:02:25,171 WARN org.mortbay.log: failed jsp: java.lang.NoClassDefFoundError: javax/servlet/jsp/JspFactory
2013-08-16 09:02:25,215 WARN org.mortbay.log: failed org.mortbay.jetty.webapp.WebAppContext#12305d34{/,jar:file:/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.25-2.3.12.3.fc19.x86_64/jre/lib/ext/hadoop-core-1.2.1.jar!/webapps/hdfs}: java.lang.NoClassDefFoundError: javax/servlet/jsp/JspFactory
2013-08-16 09:02:25,225 WARN org.mortbay.log: failed ContextHandlerCollection#25370a40: java.lang.NoClassDefFoundError: javax/servlet/jsp/JspFactory
2013-08-16 09:02:25,226 ERROR org.mortbay.log: Error starting handlers
java.lang.NoClassDefFoundError: javax/servlet/jsp/JspFactory
at org.apache.jasper.servlet.JspServlet.init(JspServlet.java:99)
at org.mortbay.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:440)
at org.mortbay.jetty.servlet.ServletHolder.doStart(ServletHolder.java:263)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:736)
at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
at org.mortbay.jetty.Server.doStart(Server.java:224)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at org.apache.hadoop.http.HttpServer.start(HttpServer.java:638)
at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:517)
at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:395)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:395)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:337)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
Caused by: java.lang.ClassNotFoundException: javax.servlet.jsp.JspFactory
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 27 more
2013-08-16 09:02:25,307 INFO org.mortbay.log: Started SelectChannelConnector#0.0.0.0:50070
2013-08-16 09:02:25,307 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:rlk cause:java.io.IOException: Problem in starting http server. Server handlers failed
2013-08-16 09:02:25,308 INFO org.mortbay.log: Stopped SelectChannelConnector#0.0.0.0:50070
2013-08-16 09:02:25,308 ERROR org.mortbay.log: EXCEPTION
java.lang.NullPointerException
at org.apache.jasper.servlet.JspServlet.destroy(JspServlet.java:282)
at org.mortbay.jetty.servlet.ServletHolder.destroyInstance(ServletHolder.java:318)
at org.mortbay.jetty.servlet.ServletHolder.doStop(ServletHolder.java:289)
at org.mortbay.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:76)
at org.mortbay.jetty.servlet.ServletHandler.doStop(ServletHandler.java:185)
at org.mortbay.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:76)
at org.mortbay.jetty.handler.HandlerWrapper.doStop(HandlerWrapper.java:142)
at org.mortbay.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:76)
at org.mortbay.jetty.handler.HandlerWrapper.doStop(HandlerWrapper.java:142)
at org.mortbay.jetty.servlet.SessionHandler.doStop(SessionHandler.java:125)
at org.mortbay.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:76)
at org.mortbay.jetty.handler.HandlerWrapper.doStop(HandlerWrapper.java:142)
at org.mortbay.jetty.handler.ContextHandler.doStop(ContextHandler.java:592)
at org.mortbay.jetty.webapp.WebAppContext.doStop(WebAppContext.java:537)
at org.mortbay.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:76)
at org.mortbay.jetty.handler.HandlerCollection.doStop(HandlerCollection.java:169)
at org.mortbay.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:76)
at org.mortbay.jetty.handler.HandlerWrapper.doStop(HandlerWrapper.java:142)
at org.mortbay.jetty.Server.doStop(Server.java:283)
at org.mortbay.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:76)
at org.apache.hadoop.http.HttpServer.stop(HttpServer.java:688)
at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:604)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:571)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
2013-08-16 09:02:25,336 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicationMonitor thread received InterruptedExceptionjava.lang.InterruptedException: sleep interrupted
2013-08-16 09:02:25,337 INFO org.apache.hadoop.hdfs.server.namenode.DecommissionManager: Interrupted Monitor
java.lang.InterruptedException: sleep interrupted
at java.lang.Thread.sleep(Native Method)
at org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
at java.lang.Thread.run(Thread.java:724)
2013-08-16 09:02:25,339 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 0 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
2013-08-16 09:02:25,375 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=4, editlog=/home/rlk/hduser/dfs/name/current/edits
2013-08-16 09:02:25,375 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 4, editlog=/home/rlk/hduser/dfs/name/current/edits
2013-08-16 09:02:25,403 INFO org.apache.hadoop.ipc.Server: Stopping server on 8020
2013-08-16 09:02:25,411 INFO org.apache.hadoop.ipc.metrics.RpcInstrumentation: shut down
2013-08-16 09:02:25,412 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: Problem in starting http server. Server handlers failed
at org.apache.hadoop.http.HttpServer.start(HttpServer.java:662)
at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:517)
at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:395)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:395)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:337)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
2013-08-16 09:02:25,413 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.lan/192.168.1.94
************************************************************/
I also give you my Hadoop configuration (I'm using hadoop-1.2.1):
core-site.xml :
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- core-site.xml -->
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/rlk/hduser</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost/</value>
</property>
</configuration>
hdfs-site.xml :
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- hdfs-site.xml -->
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
mapred-site.xml :
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- mapred-site.xml -->
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:8021</value>
</property>
</configuration>

I found the solution: it was a jar collision. I had duplicate jar files both in hadoop-x.y.z/ and hadoop-x.y.z/lib and in path-to-java/jre/lib/ext/.
I just removed the duplicates and everything works fine again.
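If you hit the same symptom (the NoClassDefFoundError above came from hadoop-core being loaded out of jre/lib/ext), a quick sanity check is to look for Hadoop jars that have leaked into the JRE's extension directory. The paths below are illustrative; adjust JAVA_HOME and the Hadoop directory to your installation:
# nothing Hadoop-related should live in the JRE extension directory
ls "$JAVA_HOME"/jre/lib/ext/ | grep -i hadoop
# the jars belong only to the Hadoop distribution itself
ls hadoop-1.2.1/*.jar hadoop-1.2.1/lib/*.jar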

You did not specify a port number for the master node in core-site.xml:
<property>
<name>fs.default.name</name>
<value>hdfs://master:port</value>
</property>
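For example, a concrete single-node version of that property (the port here is an assumption; 8020 is the default NameNode RPC port, and it is also the one that appears in the log above):
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:8020</value>
</property>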

The problem is in core-site.xml; please set the properties properly (note the host:port form of fs.default.name):
<property>
<name>hadoop.tmp.dir</name>
<value>/home/rlk/hduser</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>

Related

How to resolve Hadoop installation error: hdfs namenode -format

I installed Hadoop on CentOS 7. When I execute the command hdfs namenode -format,
I get output with errors. I tried several suggestions that I saw on the internet, but the problem is not solved.
21/06/16 14:15:01 INFO namenode.NameNode: Caching file names occuring more than 10 times
21/06/16 14:15:01 INFO util.GSet: Computing capacity for map cachedBlocks
21/06/16 14:15:01 INFO util.GSet: VM type = 64-bit
21/06/16 14:15:01 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
21/06/16 14:15:01 INFO util.GSet: capacity = 2^18 = 262144 entries
21/06/16 14:15:01 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
21/06/16 14:15:01 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
21/06/16 14:15:01 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
21/06/16 14:15:01 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
21/06/16 14:15:01 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
21/06/16 14:15:01 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
21/06/16 14:15:01 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
21/06/16 14:15:01 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
21/06/16 14:15:01 INFO util.GSet: Computing capacity for map NameNodeRetryCache
21/06/16 14:15:01 INFO util.GSet: VM type = 64-bit
21/06/16 14:15:01 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
21/06/16 14:15:01 INFO util.GSet: capacity = 2^15 = 32768 entries
Re-format filesystem in Storage Directory /storage/name ? (Y or N) y
21/06/16 14:15:06 WARN net.DNS: Unable to determine local hostname -falling back to "localhost"
java.net.UnknownHostException: LSHDP
localhost: LSHDP
localhost: Temporary failure in name resolution
at java.net.InetAddress.getLocalHost(InetAddress.java:1506)
at org.apache.hadoop.net.DNS.resolveLocalHostname(DNS.java:264)
at org.apache.hadoop.net.DNS.<clinit>(DNS.java:57)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.newBlockPoolID(NNStorage.java:966)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.newNamespaceInfo(NNStorage.java:575)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:157)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:991)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
Caused by: java.net.UnknownHostException: LSHDP
localhost: Temporary failure in name resolution
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:929)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1324)
at java.net.InetAddress.getLocalHost(InetAddress.java:1501)
... 8 more
21/06/16 14:15:06 WARN net.DNS: Unable to determine address of the host-falling back to "localhost" address
java.net.UnknownHostException: LSHDP
localhost: LSHDP
localhost: Temporary failure in name resolution
at java.net.InetAddress.getLocalHost(InetAddress.java:1506)
at org.apache.hadoop.net.DNS.resolveLocalHostIPAddress(DNS.java:287)
at org.apache.hadoop.net.DNS.<clinit>(DNS.java:58)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.newBlockPoolID(NNStorage.java:966)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.newNamespaceInfo(NNStorage.java:575)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:157)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:991)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
Caused by: java.net.UnknownHostException: LSHDP
localhost: Temporary failure in name resolution
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:929)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1324)
at java.net.InetAddress.getLocalHost(InetAddress.java:1501)
... 8 more
21/06/16 14:15:06 INFO namenode.FSImage: Allocated new BlockPoolId: BP-352354458-127.0.0.1-1623852906859
21/06/16 14:15:06 INFO common.Storage: Storage directory /storage/name has been successfully formatted.
21/06/16 14:15:07 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
21/06/16 14:15:07 INFO util.ExitUtil: Exiting with status 0
21/06/16 14:15:07 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at java.net.UnknownHostException: LSHDP
localhost: LSHDP
localhost: Temporary failure in name resolution
************************************************************/
You need to fix your DNS server (or the OS hosts file) so that a host named LSHDP is known.
For example, ping LSHDP should currently return a similar error.
Alternatively, you can edit your Hadoop config files to use IP addresses rather than hostnames.
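A minimal sketch of the hosts-file route (the IP address is a placeholder; use the machine's real address, or 127.0.0.1 for a purely local setup):
# /etc/hosts
192.168.1.10   LSHDP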

hadoop namenode and datanode not started

Last Edit
I fixed it by mixing many different answers together.
First I changed the permissions of:
/usr/local/hadoop_store/hdfs/namenode
/usr/local/hadoop_store/hdfs/datanode
to 777.
Then I ran stop-all.sh and restarted Hadoop.
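A less permissive alternative (a sketch, assuming the daemons run as the fsOwner shown in the log below, here the user me) would be to hand ownership of the storage directories to that user instead of opening them to everyone:
# own the HDFS storage dirs instead of using chmod 777
sudo chown -R me:me /usr/local/hadoop_store/hdfs
sudo chmod -R 755 /usr/local/hadoop_store/hdfs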
Should this question be closed?
I know this question has been asked before, but the other askers seem to work with much older versions. Also, none of the answers helped me.
I installed Hadoop 2.7.0 on Ubuntu 15.10 and followed this tutorial exactly:
https://www.digitalocean.com/community/tutorials/how-to-install-hadoop-on-ubuntu-13-10
I tried about 20 others; this was the first that was understandable.
Now, when I run jps, I get:
14812 SecondaryNameNode
15101 NodeManager
14969 ResourceManager
15519 Jps
This means the NameNode and the DataNode have not started.
Does anyone know how to fix this?
Edit:
I think this might be important: When I formatted my namenode using
hdfs namenode -format
I got one hell of an output:
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = me-Aspire-E5-574G/127.0.1.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.7.2
STARTUP_MSG: classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/curator-framework-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/common/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/gson-2.2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/usr/local/hadoop/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/usr/local/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/local/hadoop/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-client-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.7.2.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-2.7.2.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/h
adoop/common/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.7.2.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-2.7.2.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.7.2-tests.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.2.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.7.2-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.7.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/
jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.2.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.7.2.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.2.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.2.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-registry-2.7.2.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.7.2.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.7.2.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.2.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.2.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.2.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.7.2.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.2.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.2.jar
:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.2-tests.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.2.jar:/contrib/capacity-scheduler/*.jar
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r b165c4fe8a74265c792ce23f546c64604acf0e41; compiled by 'jenkins' on 2016-01-26T00:08Z
STARTUP_MSG: java = 1.7.0_101
************************************************************/
16/06/16 10:18:13 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
16/06/16 10:18:13 INFO namenode.NameNode: createNameNode [-format]
16/06/16 10:18:14 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-19779c07-66da-44f2-b05c-6664e2a2abfc
16/06/16 10:18:14 INFO namenode.FSNamesystem: No KeyProvider found.
16/06/16 10:18:14 INFO namenode.FSNamesystem: fsLock is fair:true
16/06/16 10:18:15 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
16/06/16 10:18:15 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
16/06/16 10:18:15 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
16/06/16 10:18:15 INFO blockmanagement.BlockManager: The block deletion will start around 2016 Jun 16 10:18:15
16/06/16 10:18:15 INFO util.GSet: Computing capacity for map BlocksMap
16/06/16 10:18:15 INFO util.GSet: VM type = 64-bit
16/06/16 10:18:15 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
16/06/16 10:18:15 INFO util.GSet: capacity = 2^21 = 2097152 entries
16/06/16 10:18:15 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
16/06/16 10:18:15 INFO blockmanagement.BlockManager: defaultReplication = 1
16/06/16 10:18:15 INFO blockmanagement.BlockManager: maxReplication = 512
16/06/16 10:18:15 INFO blockmanagement.BlockManager: minReplication = 1
16/06/16 10:18:15 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
16/06/16 10:18:15 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
16/06/16 10:18:15 INFO blockmanagement.BlockManager: encryptDataTransfer = false
16/06/16 10:18:15 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
16/06/16 10:18:15 INFO namenode.FSNamesystem: fsOwner =me (auth:SIMPLE)
16/06/16 10:18:15 INFO namenode.FSNamesystem: supergroup = supergroup
16/06/16 10:18:15 INFO namenode.FSNamesystem: isPermissionEnabled = true
16/06/16 10:18:15 INFO namenode.FSNamesystem: HA Enabled: false
16/06/16 10:18:15 INFO namenode.FSNamesystem: Append Enabled: true
16/06/16 10:18:15 INFO util.GSet: Computing capacity for map INodeMap
16/06/16 10:18:15 INFO util.GSet: VM type = 64-bit
16/06/16 10:18:15 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
16/06/16 10:18:15 INFO util.GSet: capacity = 2^20 = 1048576 entries
16/06/16 10:18:15 INFO namenode.FSDirectory: ACLs enabled? false
16/06/16 10:18:15 INFO namenode.FSDirectory: XAttrs enabled? true
16/06/16 10:18:15 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
16/06/16 10:18:15 INFO namenode.NameNode: Caching file names occuring more than 10 times
16/06/16 10:18:15 INFO util.GSet: Computing capacity for map cachedBlocks
16/06/16 10:18:15 INFO util.GSet: VM type = 64-bit
16/06/16 10:18:15 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
16/06/16 10:18:15 INFO util.GSet: capacity = 2^18 = 262144 entries
16/06/16 10:18:15 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
16/06/16 10:18:15 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
16/06/16 10:18:15 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
16/06/16 10:18:15 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
16/06/16 10:18:15 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
16/06/16 10:18:15 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
16/06/16 10:18:15 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
16/06/16 10:18:15 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
16/06/16 10:18:15 INFO util.GSet: Computing capacity for map NameNodeRetryCache
16/06/16 10:18:15 INFO util.GSet: VM type = 64-bit
16/06/16 10:18:15 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
16/06/16 10:18:15 INFO util.GSet: capacity = 2^15 = 32768 entries
16/06/16 10:18:15 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1368358985-127.0.1.1-1466065095377
16/06/16 10:18:15 WARN namenode.NameNode: Encountered exception during format:
java.io.IOException: Cannot create directory /usr/local/hadoop_store/hdfs/namenode/current
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:337)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:548)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:569)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:161)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:991)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
16/06/16 10:18:15 ERROR namenode.NameNode: Failed to start namenode.
java.io.IOException: Cannot create directory /usr/local/hadoop_store/hdfs/namenode/current
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:337)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:548)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:569)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:161)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:991)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
16/06/16 10:18:15 INFO util.ExitUtil: Exiting with status 1
16/06/16 10:18:15 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at me-Aspire-E5-574G/127.0.1.1
************************************************************/
I did what a user in the comments advised:
When I do:
hdfs namenode -format
I get the above long output.
However, when I do
sudo hdfs namenode -format
I get:
sudo: hdfs: command not found
Does that even make sense?
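That part does make sense: sudo resets PATH to a restricted secure_path, so an hdfs that is only on the regular user's PATH is not found when running as root. Two possible workarounds (a sketch, assuming HADOOP_HOME points at the installation):
# invoke the binary by its full path
sudo "$HADOOP_HOME"/bin/hdfs namenode -format
# or pass the caller's PATH through sudo explicitly
sudo env PATH="$PATH" hdfs namenode -format
(Note that formatting as root can leave root-owned files that the hadoop user cannot write, which circles back to the permissions fix in the edit above.)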
1. Clear the tmp folder which you set in $HADOOP_HOME/etc/hadoop/core-site.xml.
2. Format the namenode and datanode:
$HADOOP_HOME/bin/hadoop namenode -format
$HADOOP_HOME/bin/hdfs namenode -format
$HADOOP_HOME/bin/hadoop datanode -format
$HADOOP_HOME/bin/hdfs datanode -format
3. Then start Hadoop.
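For a 2.x install like this one, that last step would typically be the sbin scripts (a sketch):
$HADOOP_HOME/sbin/start-dfs.sh
$HADOOP_HOME/sbin/start-yarn.sh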

start-dfs.sh not working - localhost: Bad port 'localhost' (Hadoop 2.7.1)

I have installed Hadoop 2.7.1 on Ubuntu 14.10.
When I try the command hadoop version, it works fine.
The hadoop namenode -format command is also working fine.
The command start-dfs.sh is not working.
I am getting:
Starting namenodes on [localhost]
localhost: Bad port 'localhost'
localhost: Bad port 'localhost'
Starting secondary namenodes [0.0.0.0]
0.0.0.0: Bad Port '0.0.0.0'
core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>file:/usr/local/hadoopdata/hdfs/namenode</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>file:/usr/local/hadoopdata/hdfs/datanode</value>
</property>
</configuration>
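Aside: dfs.name.dir and dfs.data.dir are the deprecated 1.x property names. Hadoop 2.7.x still honors them, but the current keys would look like this (same values, assuming the same local paths):
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoopdata/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoopdata/hdfs/datanode</value>
</property>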
hosts file:
127.0.0.1 localhost
127.0.1.1 hp-HP-Notebook
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
hadoop namenode -format
hp@hp-HP-Notebook:~$ hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
16/01/19 22:15:18 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = hp-HP-Notebook/127.0.1.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.7.1
STARTUP_MSG: classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/usr/local/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/gson-2.2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/usr/local/hadoop/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-framework-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/
usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-client-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.7.1-tests.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.7.1-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.7.1.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/usr/local/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/had
oop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-registry-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.7.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/loca
l/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.1-tests.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.1.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 15ecc87ccf4a0228f35af08fc56de536e6ce657a; compiled by 'jenkins' on 2015-06-29T06:04Z
STARTUP_MSG: java = 1.7.0_79
************************************************************/
16/01/19 22:15:18 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
16/01/19 22:15:18 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-beba2328-b534-4370-9f89-d5b3fc3c9986
16/01/19 22:15:21 INFO namenode.FSNamesystem: No KeyProvider found.
16/01/19 22:15:21 INFO namenode.FSNamesystem: fsLock is fair:true
16/01/19 22:15:21 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
16/01/19 22:15:21 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
16/01/19 22:15:21 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
16/01/19 22:15:21 INFO blockmanagement.BlockManager: The block deletion will start around 2016 Jan 19 22:15:21
16/01/19 22:15:21 INFO util.GSet: Computing capacity for map BlocksMap
16/01/19 22:15:21 INFO util.GSet: VM type = 64-bit
16/01/19 22:15:21 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
16/01/19 22:15:21 INFO util.GSet: capacity = 2^21 = 2097152 entries
16/01/19 22:15:21 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
16/01/19 22:15:21 INFO blockmanagement.BlockManager: defaultReplication = 1
16/01/19 22:15:21 INFO blockmanagement.BlockManager: maxReplication = 512
16/01/19 22:15:21 INFO blockmanagement.BlockManager: minReplication = 1
16/01/19 22:15:21 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
16/01/19 22:15:21 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
16/01/19 22:15:21 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
16/01/19 22:15:21 INFO blockmanagement.BlockManager: encryptDataTransfer = false
16/01/19 22:15:21 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
16/01/19 22:15:21 INFO namenode.FSNamesystem: fsOwner = hp (auth:SIMPLE)
16/01/19 22:15:21 INFO namenode.FSNamesystem: supergroup = supergroup
16/01/19 22:15:21 INFO namenode.FSNamesystem: isPermissionEnabled = true
16/01/19 22:15:21 INFO namenode.FSNamesystem: HA Enabled: false
16/01/19 22:15:21 INFO namenode.FSNamesystem: Append Enabled: true
16/01/19 22:15:22 INFO util.GSet: Computing capacity for map INodeMap
16/01/19 22:15:22 INFO util.GSet: VM type = 64-bit
16/01/19 22:15:22 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
16/01/19 22:15:22 INFO util.GSet: capacity = 2^20 = 1048576 entries
16/01/19 22:15:22 INFO namenode.FSDirectory: ACLs enabled? false
16/01/19 22:15:22 INFO namenode.FSDirectory: XAttrs enabled? true
16/01/19 22:15:22 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
16/01/19 22:15:22 INFO namenode.NameNode: Caching file names occuring more than 10 times
16/01/19 22:15:22 INFO util.GSet: Computing capacity for map cachedBlocks
16/01/19 22:15:22 INFO util.GSet: VM type = 64-bit
16/01/19 22:15:22 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
16/01/19 22:15:22 INFO util.GSet: capacity = 2^18 = 262144 entries
16/01/19 22:15:22 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
16/01/19 22:15:22 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
16/01/19 22:15:22 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
16/01/19 22:15:22 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
16/01/19 22:15:22 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
16/01/19 22:15:22 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
16/01/19 22:15:22 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
16/01/19 22:15:22 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
16/01/19 22:15:22 INFO util.GSet: Computing capacity for map NameNodeRetryCache
16/01/19 22:15:22 INFO util.GSet: VM type = 64-bit
16/01/19 22:15:22 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
16/01/19 22:15:22 INFO util.GSet: capacity = 2^15 = 32768 entries
Re-format filesystem in Storage Directory /usr/local/hadoopdata/hdfs/namenode ? (Y or N) y
16/01/19 22:15:28 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1331619148-127.0.1.1-1453221928666
16/01/19 22:15:28 INFO common.Storage: Storage directory /usr/local/hadoopdata/hdfs/namenode has been successfully formatted.
16/01/19 22:15:29 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
16/01/19 22:15:29 INFO util.ExitUtil: Exiting with status 0
16/01/19 22:15:29 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hp-HP-Notebook/127.0.1.1
************************************************************/
Try this: set up the configuration files as follows, then reformat and restart.
core-site.xml (fs.default.name is deprecated on 2.x in favor of fs.defaultFS, but the old key still works)
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
mapred-site.xml
<property>
 <name>mapreduce.framework.name</name>
 <value>yarn</value>
</property>
hadoop-env.sh
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64   # example path: point at your JDK install
export HADOOP_HOME=/usr/local/hadoop                 # example path: point at your Hadoop install
yarn-site.xml
<property>
 <name>yarn.nodemanager.aux-services</name>
 <value>mapreduce_shuffle</value>
</property>
<property>
 <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
 <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
hdfs namenode -format
./start-all.sh
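If the NameNode formats cleanly but still fails to come up, or the DataNode dies shortly after a re-format, a common cause on Hadoop 2.x is a clusterID mismatch: re-formatting assigns the namenode directory a new clusterID while the datanode directory keeps the old one. Below is a minimal reset sketch; it uses the namenode directory from the log above, while the datanode path is an assumption, so substitute whatever dfs.datanode.data.dir points to. Warning: this wipes all HDFS data.
stop-all.sh
# both storage directories must agree on the new clusterID
rm -rf /usr/local/hadoopdata/hdfs/namenode/*
rm -rf /usr/local/hadoopdata/hdfs/datanode/*
hdfs namenode -format
start-all.sh
jps    # expect NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager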

Neither Namenode nor datanode is starting on master of multi-node cluster

I was able to successfully start a single-node cluster on each of two computers on my home network, but I am having trouble starting them as a multi-node cluster. When I run the command start-dfs.sh I get the following output:
hduser@eric-T5082:/usr/local/hadoop/sbin$ start-dfs.sh
Starting namenodes on [master]
master: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-eric-T5082.out
slave: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-Study-Linux.out
master: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-eric-T5082.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-eric-T5082.out
When I run jps, I get the following output:
hduser@eric-T5082:/usr/local/hadoop/sbin$ jps
The program 'jps' can be found in the following packages:
* openjdk-7-jdk
* openjdk-6-jdk
Try: sudo apt-get install <selected package>
Yet jps returns the correct result on the slave node:
hduser@Study-Linux:/usr/local/hadoop/etc/hadoop$ jps
6401 Jps
6300 DataNode
I suspect this may be due to (a) a port problem, i.e. the port being already occupied, or (b) a problem with temporary files being generated and interfering with the hdfs namenode -format command. I have tried to address (a) by trying different ports for the namenode, and (b) by erasing the temporary files before formatting.
Regarding (a), here is the result of netstat -l:
hduser@eric-T5082:/usr/local/hadoop/sbin$ netstat -l
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 eric-T5082:domain *:* LISTEN
tcp 0 0 *:50070 *:* LISTEN
tcp 0 0 *:ssh *:* LISTEN
tcp 0 0 localhost:ipp *:* LISTEN
tcp 0 0 *:50010 *:* LISTEN
tcp 0 0 *:50075 *:* LISTEN
tcp 0 0 *:50020 *:* LISTEN
tcp 0 0 localhost:52999 *:* LISTEN
tcp 0 0 master:9000 *:* LISTEN
tcp 0 0 *:50090 *:* LISTEN
tcp6 0 0 [::]:ssh [::]:* LISTEN
udp 0 0 *:36200 *:*
udp 0 0 *:19057 *:*
udp 0 0 *:ipp *:*
udp 0 0 eric-T5082:domain *:*
udp 0 0 *:bootpc *:*
udp 0 0 *:mdns *:*
udp6 0 0 [::]:mdns [::]:*
udp6 0 0 [::]:46391 [::]:*
udp6 0 0 [::]:51513 [::]:*
Here is core-site.xml:
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://master:9000</value>
<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>
And here is mapred-site.xml:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
And finally, hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/hduser/mydata/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/hduser/mydata/hdfs/datanode</value>
</property>
</configuration>
HDFS appears to be working correctly:
hduser@eric-T5082:/usr/local/hadoop/bin$ hdfs namenode -format
15/12/21 17:09:04 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = eric-T5082/127.0.1.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.7.1
STARTUP_MSG: classpath = [jar files omitted]
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 15ecc87ccf4a0228f35af08fc56de536e6ce657a; compiled by 'jenkins' on 2015-06-29T06:04Z
STARTUP_MSG: java = 1.7.0_91
************************************************************/
15/12/21 17:09:04 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
15/12/21 17:09:04 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-a8ee5a69-5938-434f-86de-57198465fb70
15/12/21 17:09:08 INFO namenode.FSNamesystem: No KeyProvider found.
15/12/21 17:09:08 INFO namenode.FSNamesystem: fsLock is fair:true
15/12/21 17:09:08 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
15/12/21 17:09:08 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
15/12/21 17:09:08 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
15/12/21 17:09:08 INFO blockmanagement.BlockManager: The block deletion will start around 2015 Dec 21 17:09:08
15/12/21 17:09:08 INFO util.GSet: Computing capacity for map BlocksMap
15/12/21 17:09:08 INFO util.GSet: VM type = 64-bit
15/12/21 17:09:08 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
15/12/21 17:09:08 INFO util.GSet: capacity = 2^21 = 2097152 entries
15/12/21 17:09:08 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
15/12/21 17:09:08 INFO blockmanagement.BlockManager: defaultReplication = 2
15/12/21 17:09:08 INFO blockmanagement.BlockManager: maxReplication = 512
15/12/21 17:09:08 INFO blockmanagement.BlockManager: minReplication = 1
15/12/21 17:09:08 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
15/12/21 17:09:08 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
15/12/21 17:09:08 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
15/12/21 17:09:08 INFO blockmanagement.BlockManager: encryptDataTransfer = false
15/12/21 17:09:08 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
15/12/21 17:09:08 INFO namenode.FSNamesystem: fsOwner = hduser (auth:SIMPLE)
15/12/21 17:09:08 INFO namenode.FSNamesystem: supergroup = supergroup
15/12/21 17:09:08 INFO namenode.FSNamesystem: isPermissionEnabled = true
15/12/21 17:09:08 INFO namenode.FSNamesystem: HA Enabled: false
15/12/21 17:09:08 INFO namenode.FSNamesystem: Append Enabled: true
15/12/21 17:09:08 INFO util.GSet: Computing capacity for map INodeMap
15/12/21 17:09:08 INFO util.GSet: VM type = 64-bit
15/12/21 17:09:08 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
15/12/21 17:09:08 INFO util.GSet: capacity = 2^20 = 1048576 entries
15/12/21 17:09:08 INFO namenode.FSDirectory: ACLs enabled? false
15/12/21 17:09:08 INFO namenode.FSDirectory: XAttrs enabled? true
15/12/21 17:09:08 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
15/12/21 17:09:08 INFO namenode.NameNode: Caching file names occuring more than 10 times
15/12/21 17:09:08 INFO util.GSet: Computing capacity for map cachedBlocks
15/12/21 17:09:08 INFO util.GSet: VM type = 64-bit
15/12/21 17:09:08 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
15/12/21 17:09:08 INFO util.GSet: capacity = 2^18 = 262144 entries
15/12/21 17:09:08 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
15/12/21 17:09:08 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
15/12/21 17:09:08 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
15/12/21 17:09:08 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
15/12/21 17:09:08 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
15/12/21 17:09:08 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
15/12/21 17:09:08 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
15/12/21 17:09:08 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
15/12/21 17:09:08 INFO util.GSet: Computing capacity for map NameNodeRetryCache
15/12/21 17:09:08 INFO util.GSet: VM type = 64-bit
15/12/21 17:09:08 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
15/12/21 17:09:08 INFO util.GSet: capacity = 2^15 = 32768 entries
15/12/21 17:09:09 INFO namenode.FSImage: Allocated new BlockPoolId: BP-923014467-127.0.1.1-1450746548917
15/12/21 17:09:09 INFO common.Storage: Storage directory /home/hduser/mydata/hdfs/namenode has been successfully formatted.
15/12/21 17:09:09 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
15/12/21 17:09:09 INFO util.ExitUtil: Exiting with status 0
15/12/21 17:09:09 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at eric-T5082/127.0.1.1
************************************************************/
Finally, here is the namenode log file:
2015-12-21 17:50:09,702 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = eric-T5082/127.0.1.1
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.7.1
STARTUP_MSG: classpath = [jar files omitted]
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 15ecc87ccf4a0228f35af08fc56de536e6ce657a; compiled by 'jenkins' on 2015-06-29T06:04Z
STARTUP_MSG: java = 1.7.0_91
************************************************************/
2015-12-21 17:50:09,722 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2015-12-21 17:50:09,752 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2015-12-21 17:50:10,933 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2015-12-21 17:50:11,338 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-12-21 17:50:11,338 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2015-12-21 17:50:11,352 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://master:9000
2015-12-21 17:50:11,353 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use master:9000 to access this namenode/service.
2015-12-21 17:50:18,046 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
2015-12-21 17:50:18,595 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2015-12-21 17:50:18,685 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2015-12-21 17:50:18,739 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2015-12-21 17:50:18,795 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-12-21 17:50:18,837 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2015-12-21 17:50:18,838 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2015-12-21 17:50:18,838 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2015-12-21 17:50:19,192 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2015-12-21 17:50:19,216 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2015-12-21 17:50:19,698 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
2015-12-21 17:50:19,699 INFO org.mortbay.log: jetty-6.1.26
2015-12-21 17:50:21,961 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2015-12-21 17:50:27,119 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2015-12-21 17:50:27,119 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2015-12-21 17:50:27,277 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
2015-12-21 17:50:27,277 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
2015-12-21 17:50:27,385 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2015-12-21 17:50:27,385 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2015-12-21 17:50:27,388 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2015-12-21 17:50:27,391 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2015 Dec 21 17:50:27
2015-12-21 17:50:27,395 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2015-12-21 17:50:27,396 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2015-12-21 17:50:27,399 INFO org.apache.hadoop.util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
2015-12-21 17:50:27,399 INFO org.apache.hadoop.util.GSet: capacity = 2^21 = 2097152 entries
2015-12-21 17:50:27,425 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2015-12-21 17:50:27,425 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 2
2015-12-21 17:50:27,426 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
2015-12-21 17:50:27,426 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
2015-12-21 17:50:27,426 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 2
2015-12-21 17:50:27,426 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
2015-12-21 17:50:27,426 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2015-12-21 17:50:27,426 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
2015-12-21 17:50:27,426 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2015-12-21 17:50:27,442 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = hduser (auth:SIMPLE)
2015-12-21 17:50:27,442 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup = supergroup
2015-12-21 17:50:27,442 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2015-12-21 17:50:27,442 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2015-12-21 17:50:27,446 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2015-12-21 17:50:27,585 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2015-12-21 17:50:27,585 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2015-12-21 17:50:27,586 INFO org.apache.hadoop.util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
2015-12-21 17:50:27,586 INFO org.apache.hadoop.util.GSet: capacity = 2^20 = 1048576 entries
2015-12-21 17:50:27,596 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: ACLs enabled? false
2015-12-21 17:50:27,596 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: XAttrs enabled? true
2015-12-21 17:50:27,596 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Maximum size of an xattr: 16384
2015-12-21 17:50:27,597 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2015-12-21 17:50:27,624 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2015-12-21 17:50:27,624 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2015-12-21 17:50:27,625 INFO org.apache.hadoop.util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
2015-12-21 17:50:27,625 INFO org.apache.hadoop.util.GSet: capacity = 2^18 = 262144 entries
2015-12-21 17:50:27,630 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2015-12-21 17:50:27,630 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2015-12-21 17:50:27,630 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
2015-12-21 17:50:27,663 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2015-12-21 17:50:27,663 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2015-12-21 17:50:27,663 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2015-12-21 17:50:27,860 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2015-12-21 17:50:27,860 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2015-12-21 17:50:27,890 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
2015-12-21 17:50:27,890 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2015-12-21 17:50:27,891 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
2015-12-21 17:50:27,891 INFO org.apache.hadoop.util.GSet: capacity = 2^15 = 32768 entries
2015-12-21 17:50:27,992 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/hduser/mydata/hdfs/namenode/in_use.lock acquired by nodename 20222@eric-T5082
2015-12-21 17:50:28,411 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Recovering unfinalized segments in /home/hduser/mydata/hdfs/namenode/current
2015-12-21 17:50:28,891 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /home/hduser/mydata/hdfs/namenode/current/edits_inprogress_0000000000000000003 -> /home/hduser/mydata/hdfs/namenode/current/edits_0000000000000000003-0000000000000000003
2015-12-21 17:50:29,189 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 1 INodes.
2015-12-21 17:50:29,311 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
2015-12-21 17:50:29,311 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid 2 from /home/hduser/mydata/hdfs/namenode/current/fsimage_0000000000000000002
2015-12-21 17:50:29,312 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@1610d6ac expecting start txid #3
2015-12-21 17:50:29,312 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file /home/hduser/mydata/hdfs/namenode/current/edits_0000000000000000003-0000000000000000003
2015-12-21 17:50:29,319 INFO org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding stream '/home/hduser/mydata/hdfs/namenode/current/edits_0000000000000000003-0000000000000000003' to transaction ID 3
2015-12-21 17:50:29,333 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Edits file /home/hduser/mydata/hdfs/namenode/current/edits_0000000000000000003-0000000000000000003 of size 1048576 edits # 1 loaded in 0 seconds
2015-12-21 17:50:29,360 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
2015-12-21 17:50:29,362 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 4
2015-12-21 17:50:29,714 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2015-12-21 17:50:29,714 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 1808 msecs
2015-12-21 17:50:32,500 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: RPC server is binding to master:9000
2015-12-21 17:50:32,561 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2015-12-21 17:50:32,632 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 9000
2015-12-21 17:50:32,867 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemState MBean
2015-12-21 17:50:32,940 INFO org.apache.hadoop.hdfs.server.namenode.LeaseManager: Number of blocks under construction: 0
2015-12-21 17:50:32,941 INFO org.apache.hadoop.hdfs.server.namenode.LeaseManager: Number of blocks under construction: 0
2015-12-21 17:50:32,941 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: initializing replication queues
2015-12-21 17:50:32,948 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 5 secs
2015-12-21 17:50:32,949 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2015-12-21 17:50:32,949 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2015-12-21 17:50:32,996 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2015-12-21 17:50:33,020 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Total number of blocks = 0
2015-12-21 17:50:33,020 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of invalid blocks = 0
2015-12-21 17:50:33,020 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of under-replicated blocks = 0
2015-12-21 17:50:33,020 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of over-replicated blocks = 0
2015-12-21 17:50:33,021 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of blocks being written = 0
2015-12-21 17:50:33,021 INFO org.apache.hadoop.hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 61 msec
2015-12-21 17:50:33,239 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode RPC up at: master/192.168.1.120:9000
2015-12-21 17:50:33,239 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
2015-12-21 17:50:33,230 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2015-12-21 17:50:33,234 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9000: starting
2015-12-21 17:50:33,281 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 30000 milliseconds
2015-12-21 17:50:35,393 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(192.168.1.109:50010, datanodeUuid=e33c9d91-19c9-4e7f-85a3-e6fe5105b2d3, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-a8ee5a69-5938-434f-86de-57198465fb70;nsid=2101567829;c=0) storage e33c9d91-19c9-4e7f-85a3-e6fe5105b2d3
2015-12-21 17:50:35,394 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2015-12-21 17:50:35,401 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/192.168.1.109:50010
2015-12-21 17:50:35,818 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2015-12-21 17:50:35,818 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new storage ID DS-b4ddb959-74db-409c-b65f-b940d01b5ec3 for DN 192.168.1.109:50010
2015-12-21 17:50:36,101 INFO BlockStateChange: BLOCK* processReport: from storage DS-b4ddb959-74db-409c-b65f-b940d01b5ec3 node DatanodeRegistration(192.168.1.109:50010, datanodeUuid=e33c9d91-19c9-4e7f-85a3-e6fe5105b2d3, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-a8ee5a69-5938-434f-86de-57198465fb70;nsid=2101567829;c=0), blocks: 0, hasStaleStorage: false, processing time: 9 msecs
2015-12-21 17:50:38,406 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(192.168.1.120:50010, datanodeUuid=ab241604-21db-4c11-91c7-5271d42f9ffa, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-a8ee5a69-5938-434f-86de-57198465fb70;nsid=2101567829;c=0) storage ab241604-21db-4c11-91c7-5271d42f9ffa
2015-12-21 17:50:38,406 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2015-12-21 17:50:38,407 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/192.168.1.120:50010
2015-12-21 17:50:38,560 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2015-12-21 17:50:38,560 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new storage ID DS-cd7c7489-dcac-4028-ac7a-a883ad1319da for DN 192.168.1.120:50010
2015-12-21 17:50:38,666 INFO BlockStateChange: BLOCK* processReport: from storage DS-cd7c7489-dcac-4028-ac7a-a883ad1319da node DatanodeRegistration(192.168.1.120:50010, datanodeUuid=ab241604-21db-4c11-91c7-5271d42f9ffa, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-a8ee5a69-5938-434f-86de-57198465fb70;nsid=2101567829;c=0), blocks: 0, hasStaleStorage: false, processing time: 1 msecs
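For what it's worth, the namenode log above shows a healthy startup: the RPC server comes up at master:9000 and both datanodes (192.168.1.109 and 192.168.1.120) register successfully. So the daemons themselves look fine, and the jps failure on the master just means the jps binary is not on that shell's PATH. jps ships with the JDK rather than the JRE, so either call it by its full path or install one of the JDK packages apt suggests:
$JAVA_HOME/bin/jps
sudo apt-get install openjdk-7-jdk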

ArrayIndexOutOfBoundsException at MapOutputBuffer$Buffer.write in MapTask (Hadoop 2.7.1)

A very odd case of ArrayIndexOutOfBoundsException in a Scalding-driven job running on Hadoop 2.7.1; the mapper log dump is below. It looks like the equator somehow gets set to a negative number in spill 2. Is this normal?
2015-08-12 23:39:19,649 INFO [main] org.apache.hadoop.mapred.MapTask: numReduceTasks: 1
2015-08-12 23:39:20,174 INFO [main] org.apache.hadoop.mapred.MapTask: (EQUATOR) 0 kvi 469762044(1879048176)
2015-08-12 23:39:20,175 INFO [main] org.apache.hadoop.mapred.MapTask: mapreduce.task.io.sort.mb: 1792
2015-08-12 23:39:20,175 INFO [main] org.apache.hadoop.mapred.MapTask: soft limit at 187904816
2015-08-12 23:39:20,175 INFO [main] org.apache.hadoop.mapred.MapTask: bufstart = 0; bufvoid = 1879048192
2015-08-12 23:39:20,175 INFO [main] org.apache.hadoop.mapred.MapTask: kvstart = 469762044; length = 117440512
2015-08-12 23:39:20,214 INFO [main] org.apache.hadoop.mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2015-08-12 23:39:20,216 INFO [main] cascading.flow.hadoop.FlowMapper: cascading version: 2.6.1
2015-08-12 23:39:20,216 INFO [main] cascading.flow.hadoop.FlowMapper: child jvm opts: -Xmx1024m -Djava.io.tmpdir=./tmp
2015-08-12 23:39:20,516 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
2015-08-12 23:39:20,552 INFO [main] cascading.flow.hadoop.FlowMapper: sourcing from: TempHfs["SequenceFile[['docId', 'otherDocId', 'score']]"][9909013673/_pipe_11__pipe_12/]
2015-08-12 23:39:20,552 INFO [main] cascading.flow.hadoop.FlowMapper: sinking to: GroupBy(_pipe_11+_pipe_12)[by:[{1}:'docId']]
2015-08-12 23:39:29,424 INFO [main] org.apache.hadoop.mapred.MapTask: Spilling map output
2015-08-12 23:39:29,424 INFO [main] org.apache.hadoop.mapred.MapTask: bufstart = 0; bufend = 108647886; bufvoid = 1879048192
2015-08-12 23:39:29,424 INFO [main] org.apache.hadoop.mapred.MapTask: kvstart = 469762044(1879048176); kvend = 449947816(1799791264); length = 19814229/117440512
2015-08-12 23:39:29,425 INFO [main] org.apache.hadoop.mapred.MapTask: (EQUATOR) 839953118 kvi 209988272(839953088)
2015-08-12 23:39:43,985 INFO [SpillThread] org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor [.gz]
2015-08-12 23:39:46,767 INFO [SpillThread] org.apache.hadoop.mapred.MapTask: Finished spill 0
2015-08-12 23:39:46,767 INFO [main] org.apache.hadoop.mapred.MapTask: (RESET) equator 839953118 kv 209988272(839953088) kvi 178264648(713058592)
2015-08-12 23:39:46,767 INFO [main] org.apache.hadoop.mapred.MapTask: Spilling map output
2015-08-12 23:39:46,767 INFO [main] org.apache.hadoop.mapred.MapTask: bufstart = 839953118; bufend = 1014433072; bufvoid = 1879048192
2015-08-12 23:39:46,767 INFO [main] org.apache.hadoop.mapred.MapTask: kvstart = 209988272(839953088); kvend = 178264648(713058592); length = 31723625/117440512
2015-08-12 23:39:46,767 INFO [main] org.apache.hadoop.mapred.MapTask: (EQUATOR) 1696670336 kvi 424167580(1696670320)
2015-08-12 23:40:22,641 INFO [SpillThread] org.apache.hadoop.mapred.MapTask: Finished spill 1
2015-08-12 23:40:22,641 INFO [main] org.apache.hadoop.mapred.MapTask: (RESET) equator 1696670336 kv 424167580(1696670320) kvi 392768808(1571075232)
2015-08-12 23:40:22,641 INFO [main] org.apache.hadoop.mapred.MapTask: Spilling map output
2015-08-12 23:40:22,641 INFO [main] org.apache.hadoop.mapred.MapTask: bufstart = 1696670336; bufend = 1869363604; bufvoid = 1879048192
2015-08-12 23:40:22,641 INFO [main] org.apache.hadoop.mapred.MapTask: kvstart = 424167580(1696670320); kvend = 392768808(1571075232); length = 31398773/117440512
2015-08-12 23:40:22,642 INFO [main] org.apache.hadoop.mapred.MapTask: (EQUATOR) -1742031900 kvi 34254072(137016288)
2015-08-12 23:40:47,329 INFO [SpillThread] org.apache.hadoop.mapred.MapTask: Finished spill 2
2015-08-12 23:40:47,330 INFO [main] org.apache.hadoop.mapred.MapTask: (RESET) equator -1742031900 kv 34254072(137016288) kvi 34254072(137016288)
2015-08-12 23:40:47,331 ERROR [main] cascading.flow.stream.TrapHandler: caught Throwable, no trap available, rethrowing
cascading.flow.stream.DuctException: internal error: ['7541904654925238223', '2.812180059539485']
at cascading.flow.hadoop.stream.HadoopGroupByGate.receive(HadoopGroupByGate.java:81)
at cascading.flow.hadoop.stream.HadoopGroupByGate.receive(HadoopGroupByGate.java:37)
at cascading.flow.stream.FunctionEachStage$1.collect(FunctionEachStage.java:80)
at cascading.tuple.TupleEntryCollector.safeCollect(TupleEntryCollector.java:145)
at cascading.tuple.TupleEntryCollector.add(TupleEntryCollector.java:133)
at cascading.operation.Identity$2.operate(Identity.java:137)
at cascading.operation.Identity.operate(Identity.java:150)
at cascading.flow.stream.FunctionEachStage.receive(FunctionEachStage.java:99)
at cascading.flow.stream.FunctionEachStage.receive(FunctionEachStage.java:39)
at cascading.flow.stream.SourceStage.map(SourceStage.java:102)
at cascading.flow.stream.SourceStage.run(SourceStage.java:58)
at cascading.flow.hadoop.FlowMapper.run(FlowMapper.java:130)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.ArrayIndexOutOfBoundsException
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer$Buffer.write(MapTask.java:1453)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer$Buffer.write(MapTask.java:1349)
at java.io.DataOutputStream.write(DataOutputStream.java:88)
at java.io.DataOutputStream.writeByte(DataOutputStream.java:153)
at org.apache.hadoop.io.WritableUtils.writeVLong(WritableUtils.java:273)
at org.apache.hadoop.io.WritableUtils.writeVInt(WritableUtils.java:253)
at cascading.tuple.hadoop.io.HadoopTupleOutputStream.writeIntInternal(HadoopTupleOutputStream.java:155)
at cascading.tuple.io.TupleOutputStream.write(TupleOutputStream.java:86)
at cascading.tuple.io.TupleOutputStream.writeTuple(TupleOutputStream.java:64)
at cascading.tuple.hadoop.io.TupleSerializer.serialize(TupleSerializer.java:37)
at cascading.tuple.hadoop.io.TupleSerializer.serialize(TupleSerializer.java:28)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:1149)
at org.apache.hadoop.mapred.MapTask$OldOutputCollector.collect(MapTask.java:610)
at cascading.tap.hadoop.util.MeasuredOutputCollector.collect(MeasuredOutputCollector.java:69)
at cascading.flow.hadoop.stream.HadoopGroupByGate.receive(HadoopGroupByGate.java:68)
... 18 more
It is mapreduce.task.io.sort.mb that makes the difference: when it is set to 2 GB or larger, the job consistently runs into this problem. Presumably this is integer overflow in the circular sort buffer's index arithmetic, since the buffer is a single Java byte array whose size is capped below 2 GB; that would also explain the negative equator in spill 2. It is suggested to set it to the value below or smaller:
-Dmapreduce.task.io.sort.mb=1792
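For reference, here is a sketch of passing the setting per-job on the command line; the jar and class names are placeholders, and it assumes the job implements Tool so that generic -D options are parsed:
hadoop jar my-job.jar com.example.MyJob -Dmapreduce.task.io.sort.mb=1792 input output
The same key can instead be pinned for all jobs in mapred-site.xml, as in the workaround below.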
I suspect a threading issue, so I tried the settings below and they worked; I am not sure the cure will stick. They mostly restore the stock defaults (100 MB sort buffer, 0.8 spill threshold, sort factor 10) and force a single thread if MultithreadedMapper is in use.
<property>
<name>mapreduce.map.sort.spill.percent</name>
<value>0.8</value>
</property>
<property>
<name>mapreduce.task.io.sort.factor</name>
<value>10</value>
</property>
<property>
<name>mapreduce.task.io.sort.mb</name>
<value>100</value>
</property>
<property>
<name>mapred.map.multithreadedrunner.threads</name>
<value>1</value>
</property>
<property>
<name>mapreduce.mapper.multithreadedmapper.threads</name>
<value>1</value>
</property>
