Windows Hadoop install fails to format the namenode - hadoop

I don't know why this error is happening in Hadoop version 2.7.1.
$ ./hadoop namenode –format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
15/12/31 22:26:34 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = ZKCRJB84CNJ0ZTJ/172.29.66.77
STARTUP_MSG: args = [▒Cformat]
STARTUP_MSG: version = 2.7.1
STARTUP_MSG: classpath =
C:\hadoop\etc\hadoop;C:\hadoop\share\hadoop\common\lib\activation-1.1.jar;C:\hadoop\share\hadoop\common\lib\apacheds-i18n-2.0.0-M15.jar;
C:\hadoop\share\hadoop\common\lib\apacheds-kerberos-codec-2.0.0-M15.jar;
(... middle of classpath omitted ...)
C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-common-2.7.1.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-core-2.7.1.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-hs-2.7.1.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-hs-plugins-2.7.1.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-jobclient-2.7.1-tests.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-jobclient-2.7.1.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-shuffle-2.7.1.jar;C:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-examples-2.7.1.jar;C:\cygwin\contrib\capacity-scheduler\*.jar;C;C:\hadoop\contrib\capacity-scheduler\*.jar
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 15ecc87ccf4a0228f35af08fc56de536e6ce657a; compiled by 'jenkins' on 2015-06-29T06:04Z
STARTUP_MSG: java = 1.8.0_51
************************************************************/
15/12/31 22:26:34 INFO namenode.NameNode: createNameNode [▒Cformat]
Usage: java NameNode [-backup] |
[-checkpoint] |
[-format [-clusterid cid ] [-force] [-nonInteractive] ] |
[-upgrade [-clusterid cid] [-renameReserved<k-v pairs>] ] |
[-upgradeOnly [-clusterid cid] [-renameReserved<k-v pairs>] ] |
[-rollback] |
[-rollingUpgrade <rollback|downgrade|started> ] |
[-finalize] |
[-importCheckpoint] |
[-initializeSharedEdits] |
[-bootstrapStandby] |
[-recover [ -force] ] |
[-metadataVersion ] ]
15/12/31 22:26:34 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ZKCRJB84CNJ0ZTJ/172.29.66.77
************************************************************/

Based on your args = [▒Cformat], you typed the wrong dash character: use a plain ASCII hyphen (-), not a typographic dash such as – or —.
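For example, retyped with a plain hyphen (and using the non-deprecated hdfs entry point the startup banner suggests), the command would be:

hdfs namenode -format

If the command was copied from a web page or a word processor, retype the dash by hand; those sources often substitute typographic dashes, which the shell then passes through as a garbled argument like [▒Cformat].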

Related

Flume agent does not contain any valid channels

I am new to Flume. I'm trying to pull data from Twitter, but I am not succeeding. (I am using the Cloudera Quickstart VM.)
My conf file looks like this:
TwitterAgent.sources = Twitter
TwitterAgent.channels = MemChannel
TwitterAgent.sinks = HDFS
TwitterAgent.sources.Twitter.type = com.cloudera.flume.source.TwitterSource
TwitterAgent.sources.Twitter.channels = MemChannel
I have also added the consumerKey, consumerSecret, accessToken, accessTokenSecret, keywords, and path values, taken from my Twitter account.
TwitterAgent.sinks.HDFS.channel = MemChannel
TwitterAgent.sinks.HDFS.type = hdfs
TwitterAgent.sinks.HDFS.hdfs.fileType = DataStream
TwitterAgent.sinks.HDFS.hdfs.writeFormat = Text
TwitterAgent.sinks.HDFS.hdfs.batchSize = 1000
TwitterAgent.sinks.HDFS.hdfs.rollsize = 0
TwitterAgent.sinks.HDFS.hdfs.rollCount = 10000
The command I am using to execute the conf file is:
flume-ng agent --conf conf --conf-file flume.conf -Dflume.root.logger=DEBUG,console -name TwitterAgent
The error I am getting is:
18/06/27 12:17:18 WARN conf.FlumeConfiguration: Agent configuration for 'TwitterAgent' does not contain any valid channels. Marking it as invalid.
18/06/27 12:17:18 WARN conf.FlumeConfiguration: Agent configuration invalid for agent 'TwitterAgent'. It will be removed.
18/06/27 12:17:18 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration for agents: []
18/06/27 12:17:18 WARN node.AbstractConfigurationProvider: No configuration found for this host:TwitterAgent
18/06/27 12:17:18 INFO node.Application: Starting new configuration:{ sourceRunners:{} sinkRunners:{} channels:{} }
Please advise me.
I think you have an issue in your execution command; the error is about finding the configuration file.
The command should be:
flume-ng agent -c conf -f conf/flume.conf -Dflume.root.logger=DEBUG,console -n TwitterAgent
You have to specify the path to your configuration file: try -f conf/flume.conf instead of -f flume.conf.
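Separately, note that the warning is specifically about channels, and the configuration shown never defines MemChannel itself. A minimal memory-channel definition uses the standard Flume channel properties below (the capacity numbers are only illustrative):

TwitterAgent.channels.MemChannel.type = memory
TwitterAgent.channels.MemChannel.capacity = 10000
TwitterAgent.channels.MemChannel.transactionCapacity = 1000

Without at least a type for MemChannel, Flume marks the agent's channel configuration as invalid even when the file path is correct.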

Getting NullPointerException while trying to stop apache-activemq?

I am unable to shut down my ActiveMQ gracefully after enabling JMX. Please help and tell me what I am doing wrong. Here is what I am trying to do.
Start ActiveMQ:
[mwapp@JMNGD1BAO150V02 ~]$ /app/apache-activemq-5.14.0/bin/activemq start xbean:/app/apache-activemq-5.14.0/conf/activemq-security.xml
INFO: Loading '/app/apache-activemq-5.14.0//bin/env'
INFO: Using java '/usr/java/jre1.7.0_79//bin/java'
INFO: Starting - inspect logfiles specified in logging.properties and log4j.properties to get details
INFO: pidfile created : '/app/apache-activemq-5.14.0//data/activemq.pid' (pid '16917')
activemq.log (it looks fine to me):
2017-10-12 13:48:18,936 | INFO | Refreshing org.apache.activemq.xbean.XBeanBrokerFactory$1#2142b533: startup date [Thu Oct 12 13:48:18 IST 2017]; root of context hierarchy | org.apache.activemq.xbean.XBeanBrokerFactory$1 | main
2017-10-12 13:48:20,008 | INFO | Loading properties file from URL [file:/app/apache-activemq-5.14.0//conf/credentials-enc.properties] | org.jasypt.spring31.properties.EncryptablePropertyPlaceholderConfigurer | main
2017-10-12 13:48:20,975 | INFO | Loaded the Bouncy Castle security provider. | org.apache.activemq.broker.BrokerService | main
2017-10-12 13:48:21,283 | INFO | Using Persistence Adapter: KahaDBPersistenceAdapter[/jms_nas/kahadb] | org.apache.activemq.broker.BrokerService | main
2017-10-12 13:48:21,308 | INFO | JMX consoles can connect to service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi | org.apache.activemq.broker.jmx.ManagementContext | JMX connector
2017-10-12 13:48:21,594 | INFO | KahaDB is version 6 | org.apache.activemq.store.kahadb.MessageDatabase | main
2017-10-12 13:48:21,653 | INFO | Recovering from the journal #1105:27118028 | org.apache.activemq.store.kahadb.MessageDatabase | main
2017-10-12 13:48:21,657 | INFO | Recovery replayed 58 operations from the journal in 0.046 seconds. | org.apache.activemq.store.kahadb.MessageDatabase | main
2017-10-12 13:48:21,719 | INFO | PListStore:[/app/apache-activemq-5.14.0/data/localhost/tmp_storage] started | org.apache.activemq.store.kahadb.plist.PListStoreImpl | main
2017-10-12 13:48:21,903 | INFO | Apache ActiveMQ 5.14.0 (localhost, ID:JMNGD1BAO150V02-59661-1507796301746-0:1) is starting | org.apache.activemq.broker.BrokerService | main
2017-10-12 13:48:22,786 | INFO | Listening for connections at: ssl://JMNGD1BAO150V02:61616?needClientAuth=true&maximumConnections=1000&wireFormat.maxFrameSize=104857600 | org.apache.activemq.transport.TransportServerThreadSupport | main
2017-10-12 13:48:22,787 | INFO | Connector ssl started | org.apache.activemq.broker.TransportConnector | main
2017-10-12 13:48:22,787 | INFO | Apache ActiveMQ 5.14.0 (localhost, ID:JMNGD1BAO150V02-59661-1507796301746-0:1) started | org.apache.activemq.broker.BrokerService | main
2017-10-12 13:48:22,787 | INFO | For help or more information please see: http://activemq.apache.org | org.apache.activemq.broker.BrokerService | main
2017-10-12 13:48:22,788 | WARN | Store limit is 102400 mb (current store usage is 1397 mb). The data directory: /jms_nas/kahadb only has 91534 mb of usable space. - resetting to maximum available disk space: 91534 mb | org.apache.activemq.broker.BrokerService | main
2017-10-12 13:48:23,646 | INFO | No Spring WebApplicationInitializer types detected on classpath | /admin | main
2017-10-12 13:48:23,755 | INFO | ActiveMQ WebConsole available at http://localhost:8161/ | org.apache.activemq.web.WebConsoleStarter | main
2017-10-12 13:48:23,755 | INFO | ActiveMQ Jolokia REST API available at http://localhost:8161/api/jolokia/ | org.apache.activemq.web.WebConsoleStarter | main
2017-10-12 13:48:23,799 | INFO | Initializing Spring FrameworkServlet 'dispatcher' | /admin | main
2017-10-12 13:48:24,068 | INFO | No Spring WebApplicationInitializer types detected on classpath | /api | main
2017-10-12 13:48:24,185 | INFO | jolokia-agent: Using policy access restrictor classpath:/jolokia-access.xml | /api | main
Stop ActiveMQ:
[mwapp@JMNGD1BAO150V02 ~]$ /app/apache-activemq-5.14.0/bin/activemq stop xbean:/app/apache-activemq-5.14.0/conf/activemq-security.xml
INFO: Loading '/app/apache-activemq-5.14.0//bin/env'
INFO: Using java '/usr/java/jre1.7.0_79//bin/java'
INFO: Waiting at least 30 seconds for regular process termination of pid '16917' :
Java Runtime: Oracle Corporation 1.7.0_79 /usr/java/jre1.7.0_79
Heap sizes: current=63488k free=61608k max=932352k
JVM args: -Xms64M -Xmx1G -Djava.util.logging.config.file=logging.properties -Djava.security.auth.login.config=/app/apache-activemq-5.14.0//conf/login.config -Dactivemq.classpath=/app/apache-activemq-5.14.0//conf:/app/apache-activemq-5.14.0//../lib/: -Dactivemq.home=/app/apache-activemq-5.14.0/ -Dactivemq.base=/app/apache-activemq-5.14.0/ -Dactivemq.conf=/app/apache-activemq-5.14.0//conf -Dactivemq.data=/app/apache-activemq-5.14.0//data
Extensions classpath:
[/app/apache-activemq-5.14.0/lib,/app/apache-activemq-5.14.0/lib/camel,/app/apache-activemq-5.14.0/lib/optional,/app/apache-activemq-5.14.0/lib/web,/app/apache-activemq-5.14.0/lib/extra]
ACTIVEMQ_HOME: /app/apache-activemq-5.14.0
ACTIVEMQ_BASE: /app/apache-activemq-5.14.0
ACTIVEMQ_CONF: /app/apache-activemq-5.14.0/conf
ACTIVEMQ_DATA: /app/apache-activemq-5.14.0/data
Connecting to JMX URL: service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi
ERROR: java.lang.NullPointerException
java.lang.NullPointerException
at org.apache.activemq.console.command.AbstractCommand.handleException(AbstractCommand.java:167)
at org.apache.activemq.console.command.AbstractJmxCommand.execute(AbstractJmxCommand.java:390)
at org.apache.activemq.console.command.ShellCommand.runTask(ShellCommand.java:154)
at org.apache.activemq.console.command.AbstractCommand.execute(AbstractCommand.java:63)
at org.apache.activemq.console.command.ShellCommand.main(ShellCommand.java:104)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.activemq.console.Main.runTaskClass(Main.java:262)
at org.apache.activemq.console.Main.main(Main.java:115)
...............................
INFO: Regular shutdown not successful, sending SIGKILL to process
INFO: sending SIGKILL to pid '16917'
As you can see, the system is forcibly shut down via the pid, which is not expected. I am currently using apache-activemq-5.14.0, and the configuration files look like the ones below. I am not sure why ActiveMQ provides two separate files for enabling JMX, i.e. env and activemq-security.xml, or whether the env file plays some different role. I read the documentation, and I got more confused when it mentioned that from v5.12.0 onwards OCSP is supported. Do I need to enable that too?
${ACTIVEMQ_HOME}/bin/env
#!/bin/sh
# Active MQ installation dirs
# ACTIVEMQ_HOME="<Installationdir>/"
# ACTIVEMQ_BASE="$ACTIVEMQ_HOME"
# ACTIVEMQ_CONF="$ACTIVEMQ_BASE/conf"
# ACTIVEMQ_DATA="$ACTIVEMQ_BASE/data"
# ACTIVEMQ_TMP="$ACTIVEMQ_BASE/tmp"
ACTIVEMQ_OPTS_MEMORY="-Xms64M -Xmx1G"
if [ -z "$ACTIVEMQ_OPTS" ] ; then
ACTIVEMQ_OPTS="$ACTIVEMQ_OPTS_MEMORY -Djava.util.logging.config.file=logging.properties -Djava.security.auth.login.config=$ACTIVEMQ_CONF/login.config"
fi
#ACTIVEMQ_OPTS="$ACTIVEMQ_OPTS -Dorg.apache.activemq.audit=true"
# ACTIVEMQ_SUNJMX_START="$ACTIVEMQ_SUNJMX_START -Dcom.sun.management.jmxremote.port=1099"
# ACTIVEMQ_SUNJMX_START="$ACTIVEMQ_SUNJMX_START -Dcom.sun.management.jmxremote.password.file=${ACTIVEMQ_CONF}/jmx.password"
# ACTIVEMQ_SUNJMX_START="$ACTIVEMQ_SUNJMX_START -Dcom.sun.management.jmxremote.access.file=${ACTIVEMQ_CONF}/jmx.access"
# ACTIVEMQ_SUNJMX_START="$ACTIVEMQ_SUNJMX_START -Dcom.sun.management.jmxremote.ssl=true"
ACTIVEMQ_SUNJMX_START="$ACTIVEMQ_SUNJMX_START -Dcom.sun.management.jmxremote"
#ACTIVEMQ_SUNJMX_CONTROL="--jmxurl service:jmx:rmi:///jndi/rmi://127.0.0.1:1099/jmxrmi --jmxuser controlRole --jmxpassword abcd1234"
ACTIVEMQ_SUNJMX_CONTROL=""
if [ -z "$ACTIVEMQ_QUEUEMANAGERURL" ]; then
ACTIVEMQ_QUEUEMANAGERURL="--amqurl tcp://localhost:61616"
fi
if [ -z "$ACTIVEMQ_SSL_OPTS" ] ; then
#ACTIVEMQ_SSL_OPTS="-Djava.security.properties=$ACTIVEMQ_CONF/java.security"
ACTIVEMQ_SSL_OPTS=""
fi
#ACTIVEMQ_DEBUG_OPTS="-Xdebug -Xnoagent -Djava.compiler=NONE -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005"
if [ -z "$ACTIVEMQ_KILL_MAXSECONDS" ]; then
ACTIVEMQ_KILL_MAXSECONDS=30
fi
ACTIVEMQ_USER=""
# ACTIVEMQ_PIDFILE="$ACTIVEMQ_DATA/activemq.pid"
JAVA_HOME="/usr/java/jre1.7.0_79/"
${ACTIVEMQ_HOME}/conf/activemq-security.xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="${activemq.data}">
----------------------------------------------
<managementContext>
<!--managementContext createConnector="true" connectorPort="1099"/-->
<managementContext createConnector="true">
<property xmlns="http://www.springframework.org/schema/beans" name="environment">
<map xmlns="http://www.springframework.org/schema/beans">
<entry xmlns="http://www.springframework.org/schema/beans" key="jmx.remote.x.password.file" value="${activemq.base}/conf/jmx.password"/>
<entry xmlns="http://www.springframework.org/schema/beans" key="jmx.remote.x.access.file" value="${activemq.base}/conf/jmx.access"/>
</map>
</property>
</managementContext>
</managementContext>
----------------------------------------------
<shutdownHooks>
<bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook"/>
</shutdownHooks>
</broker>
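One detail stands out in the files above: the managementContext requires a JMX password and access file, but ACTIVEMQ_SUNJMX_CONTROL is set to the empty string, so the stop command connects to JMX without credentials. A sketch of the control line with authentication filled in, modeled on the commented example already present in env (controlRole/abcd1234 are the placeholder values from that example, not real credentials):

ACTIVEMQ_SUNJMX_CONTROL="--jmxurl service:jmx:rmi:///jndi/rmi://127.0.0.1:1099/jmxrmi --jmxuser controlRole --jmxpassword abcd1234"

A stop attempt that fails JMX authentication would be consistent with the NullPointerException and the SIGKILL fallback shown above.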

Error while executing Pig script?

p.pig contains the following code:
salaries= load 'salaries' using PigStorage(',') As (gender, age,salary,zip);
salaries= load 'salaries' using PigStorage(',') As (gender:chararray,age:int,salary:double,zip:long);
salaries=load 'salaries' using PigStorage(',') as (gender:chararray,details:bag{b(age:int,salary:double,zip:long)});
highsal= filter salaries by salary > 75000;
dump highsal
salbyage= group salaries by age;
describe salbyage;
salbyage= group salaries All;
salgrp= group salaries by $3;
A= foreach salaries generate age,salary;
describe A;
salaries= load 'salaries.txt' using PigStorage(',') as (gender:chararray,age:int,salary:double,zip:int);
vivek@ubuntu:~/Applications/Hadoop_program/pip$ pig -x mapreduce p.pig
15/09/24 03:16:32 INFO pig.ExecTypeProvider: Trying ExecType : LOCAL
15/09/24 03:16:32 INFO pig.ExecTypeProvider: Trying ExecType : MAPREDUCE
15/09/24 03:16:32 INFO pig.ExecTypeProvider: Picked MAPREDUCE as the ExecType
2015-09-24 03:16:32,990 [main] INFO org.apache.pig.Main - Apache Pig version 0.14.0 (r1640057) compiled Nov 16 2014, 18:02:05
2015-09-24 03:16:32,991 [main] INFO org.apache.pig.Main - Logging error messages to: /home/vivek/Applications/Hadoop_program/pip/pig_1443089792987.log
2015-09-24 03:16:38,966 [main] WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2015-09-24 03:16:41,232 [main] INFO org.apache.pig.impl.util.Utils - Default bootup file /home/vivek/.pigbootup not found
2015-09-24 03:16:42,869 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
2015-09-24 03:16:42,870 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2015-09-24 03:16:42,870 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://localhost:9000
2015-09-24 03:16:45,436 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1000: Error during parsing. Encountered " <PATH> "salaries=load "" at line 7, column 1.
Was expecting one of:
<EOF>
"cat" ...
"clear" ...
"fs" ...
"sh" ...
"cd" ...
"cp" ...
"copyFromLocal" ...
"copyToLocal" ...
"dump" ...
"\\d" ...
"describe" ...
"\\de" ...
"aliases" ...
"explain" ...
"\\e" ...
"help" ...
"history" ...
"kill" ...
"ls" ...
"mv" ...
"mkdir" ...
"pwd" ...
"quit" ...
"\\q" ...
"register" ...
"rm" ...
"rmf" ...
"set" ...
"illustrate" ...
"\\i" ...
"run" ...
"exec" ...
"scriptDone" ...
"" ...
"" ...
<EOL> ...
";" ...
Details at logfile: /home/vivek/Applications/Hadoop_program/pip/pig_1443089792987.log
2015-09-24 03:16:45,554 [main] INFO org.apache.pig.Main - Pig script completed in 13 seconds and 48 milliseconds (13048 ms)
vivek@ubuntu:~/Applications/Hadoop_program/pip$
p.pig contains the code given above.
I started Pig in mapreduce mode.
While executing the script it encounters the following error:
ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1000: Error during parsing. Encountered " "salaries=load "" at line 7, column 1.
Please help me resolve the error.
You have not provided a space between the alias name and the command.
Pig expects at least one space before or after the '=' operator.
Change this line :
salaries=load 'salaries' using PigStorage(',') as (gender:chararray,details:bag{b(age:int,salary:double,zip:long)});
TO
salaries = load 'salaries' using PigStorage(',') as (gender:chararray,details:bag{b(age:int,salary:double,zip:long)});
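The error message itself points at this: the parser reports encountering a <PATH> token "salaries=load", meaning it lexed the whole run as a single path-like token rather than as an alias, '=', and 'load'. A spaced-out sketch of the start of the script (same schema as above):

salaries = load 'salaries' using PigStorage(',') as (gender:chararray, age:int, salary:double, zip:long);
highsal = filter salaries by salary > 75000;
dump highsal;

Note the trailing semicolon on the dump statement; the original script omits it, which is another easy way to confuse the parser about where the next statement starts.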

Failing to load data into the Hortonworks Sandbox in Pig

Hi, I am a complete neophyte to Hadoop. When I first ran this command,
LOAD 'Pig/iris.csv' using PigStorage (',')
the error popped out:
LOAD 'Pig/iris.csv' using PigStorage (',');
2014-09-05 06:04:04,853 [main] INFO org.apache.pig.Main - Apache Pig version 0.12.1.2.1.1.0-385 (rexported) compiled Apr 16 2014, 15:59:00
2014-09-05 06:04:04,885 [main] INFO org.apache.pig.Main - Logging error messages to: /dev/null
2014-09-05 06:04:07,077 [main] INFO org.apache.pig.impl.util.Utils - Default bootup file /usr/lib/hue/.pigbootup not found
2014-09-05 06:04:14,699 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
2014-09-05 06:04:14,699 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2014-09-05 06:04:14,699 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://sandbox.hortonworks.com:8020
2014-09-05 06:05:11,826 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
grunt> LOAD 'Pig/iris.csv' using PigStorage (',');
2014-09-05 06:05:13,203 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1000: Error during parsing. Encountered " <IDENTIFIER> "LOAD "" at line 1, column 1.
Was expecting one of:
<EOF>
"cat" ...
"clear" ...
"fs" ...
"sh" ...
"cd" ...
"cp" ...
"copyFromLocal" ...
"copyToLocal" ...
"dump" ...
"\\d" ...
"describe" ...
"\\de" ...
"aliases" ...
"explain" ...
"\\e" ...
"help" ...
"history" ...
"kill" ...
"ls" ...
"mv" ...
"mkdir" ...
"pwd" ...
"quit" ...
"\\q" ...
"register" ...
"rm" ...
"rmf" ...
"set" ...
"illustrate" ...
"\\i" ...
"run" ...
"exec" ...
"scriptDone" ...
"" ...
"" ...
<EOL> ...
";" ...
Details at logfile: /dev/null
Does anyone know how to solve the problem?
LOAD creates a relation. You need to assign it to an alias so that you can do something with it later:
L = LOAD 'Pig/iris.csv' using PigStorage (',');
DUMP L;

Hadoop on Windows: FileNotFoundException

I'm using Hadoop on Windows and I've configured everything properly (installed Cygwin, passwordless SSH, etc.).
I've compiled the wordcount program into WC.jar and tried to run it. It runs perfectly in standalone mode, but in fully distributed mode it gives a FileNotFoundException.
Please look into the logs and tell me what is wrong.
I've started the DFS and MapReduce daemons on MACH1 (that's my master).
$ bin/hadoop jar WC.jar WordCount words result
10/07/24 16:57:38 INFO input.FileInputFormat: Total input paths to process : 2
10/07/24 16:57:39 INFO mapred.JobClient: Running job: job_201007241657_0001
10/07/24 16:57:40 INFO mapred.JobClient: map 0% reduce 0%
10/07/24 16:57:50 INFO mapred.JobClient: Task Id : attempt_201007241657_0001_m_000003_0, Status : FAILED
java.io.FileNotFoundException: File C:/tmp/hadoop-328510/mapred/local/taskTracker/jobcache/job_201007241657_0001/attempt_201007241657_0001_m_000003_0/work/tmp does not exist.
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:361)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
at org.apache.hadoop.mapred.TaskRunner.setupWorkDir(TaskRunner.java:519)
at org.apache.hadoop.mapred.Child.main(Child.java:155)
10/07/24 16:57:55 INFO mapred.JobClient: Task Id : attempt_201007241657_0001_r_000002_0, Status : FAILED
java.io.FileNotFoundException: File C:/tmp/hadoop-328510/mapred/local/taskTracker/jobcache/job_201007241657_0001/attempt_201007241657_0001_r_000002_0/work/tmp does not exist.
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:361)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
at org.apache.hadoop.mapred.TaskRunner.setupWorkDir(TaskRunner.java:519)
at org.apache.hadoop.mapred.Child.main(Child.java:155)
10/07/24 16:58:07 INFO mapred.JobClient: Task Id : attempt_201007241657_0001_m_000003_1, Status : FAILED
java.io.FileNotFoundException: File C:/tmp/hadoop-SYSTEM/mapred/local/taskTracker/jobcache/job_201007241657_0001/attempt_201007241657_0001_m_000003_1/work/tmp does not exist.
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:361)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
at org.apache.hadoop.mapred.TaskRunner.setupWorkDir(TaskRunner.java:519)
at org.apache.hadoop.mapred.Child.main(Child.java:155)
10/07/24 16:58:14 INFO mapred.JobClient: Task Id : attempt_201007241657_0001_m_000003_2, Status : FAILED
java.io.FileNotFoundException: File C:/tmp/hadoop-SYSTEM/mapred/local/taskTracker/jobcache/job_201007241657_0001/attempt_201007241657_0001_m_000003_2/work/tmp does not exist.
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:361)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
at org.apache.hadoop.mapred.TaskRunner.setupWorkDir(TaskRunner.java:519)
at org.apache.hadoop.mapred.Child.main(Child.java:155)
10/07/24 16:58:26 INFO mapred.JobClient: Task Id : attempt_201007241657_0001_m_000002_0, Status : FAILED
java.io.FileNotFoundException: File C:/tmp/hadoop-SYSTEM/mapred/local/taskTracker/jobcache/job_201007241657_0001/attempt_201007241657_0001_m_000002_0/work/tmp does not exist.
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:361)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
at org.apache.hadoop.mapred.TaskRunner.setupWorkDir(TaskRunner.java:519)
at org.apache.hadoop.mapred.Child.main(Child.java:155)
10/07/24 16:58:34 INFO mapred.JobClient: Task Id : attempt_201007241657_0001_r_000001_0, Status : FAILED
java.io.FileNotFoundException: File C:/tmp/hadoop-SYSTEM/mapred/local/taskTracker/jobcache/job_201007241657_0001/attempt_201007241657_0001_r_000001_0/work/tmp does not exist.
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:361)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
at org.apache.hadoop.mapred.TaskRunner.setupWorkDir(TaskRunner.java:519)
at org.apache.hadoop.mapred.Child.main(Child.java:155)
10/07/24 16:58:41 INFO mapred.JobClient: Task Id : attempt_201007241657_0001_m_000002_1, Status : FAILED
java.io.FileNotFoundException: File C:/tmp/hadoop-328510/mapred/local/taskTracker/jobcache/job_201007241657_0001/attempt_201007241657_0001_m_000002_1/work/tmp does not exist.
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:361)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
at org.apache.hadoop.mapred.TaskRunner.setupWorkDir(TaskRunner.java:519)
at org.apache.hadoop.mapred.Child.main(Child.java:155)
10/07/24 16:58:47 INFO mapred.JobClient: Task Id : attempt_201007241657_0001_m_000002_2, Status : FAILED
java.io.FileNotFoundException: File C:/tmp/hadoop-328510/mapred/local/taskTracker/jobcache/job_201007241657_0001/attempt_201007241657_0001_m_000002_2/work/tmp does not exist.
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:361)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
at org.apache.hadoop.mapred.TaskRunner.setupWorkDir(TaskRunner.java:519)
at org.apache.hadoop.mapred.Child.main(Child.java:155)
10/07/24 16:58:53 INFO mapred.JobClient: Job complete: job_201007241657_0001
10/07/24 16:58:53 INFO mapred.JobClient: Counters: 0
328510@01HW179531 /usr/local/hadoop-0.20.2
$
Thanks.
I think I might have seen this exception before but I don't have access to my old logs to confirm it. I solved my FileNotFoundException by reformatting the namenode. You might want to check the namenode logs for "inconsistent state" to confirm the cause before reformatting.
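For reference, on a 0.20-era install like the one in the question, the reformat sequence would look roughly like the following (it wipes all HDFS metadata, so only do this if the cluster's data is disposable or backed up):

bin/stop-all.sh
bin/hadoop namenode -format
bin/start-all.sh

stop-all.sh, start-all.sh, and hadoop namenode -format are the standard Hadoop 0.20.2 control commands; confirm when the format command prompts you to re-format the filesystem.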
