Hadoop 2.6.4: 1 master + 2 slaves on AWS EC2
Master: NameNode, Secondary NameNode, ResourceManager
Slaves: DataNode, NodeManager
When I run a test MapReduce job (wordcount), it hangs immediately after submission:
hduser@ip-172-31-4-108:~$ hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.4.jar wordcount /data/shakespeare /data/out1
16/03/21 10:45:19 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-4-108/172.31.4.108:8032
16/03/21 10:45:21 INFO input.FileInputFormat: Total input paths to process : 5
16/03/21 10:45:21 INFO mapreduce.JobSubmitter: number of splits:5
16/03/21 10:45:22 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1458556970596_0001
16/03/21 10:45:22 INFO impl.YarnClientImpl: Submitted application application_1458556970596_0001
16/03/21 10:45:22 INFO mapreduce.Job: The url to track the job: http://ip-172-31-4-108:8088/proxy/application_1458556970596_0001/
16/03/21 10:45:22 INFO mapreduce.Job: Running job: job_1458556970596_0001
When I run start-dfs.sh and start-yarn.sh on the master, all daemons start successfully (verified with jps) on their corresponding EC2 instances.
Below is the ResourceManager log from when the job was launched:
2016-03-21 10:45:20,152 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Allocated new applicationId: 1
2016-03-21 10:45:22,784 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Application with id 1 submitted by user hduser
2016-03-21 10:45:22,785 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Storing application with id application_1458556970596_0001
2016-03-21 10:45:22,787 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hduser IP=172.31.4.108 OPERATION=Submit Application Request TARGET=ClientRMService RESULT=SUCCESS APPID=application_1458556970596_0001
2016-03-21 10:45:22,788 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1458556970596_0001 State change from NEW to NEW_SAVING
2016-03-21 10:45:22,805 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing info for app: application_1458556970596_0001
2016-03-21 10:45:22,807 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1458556970596_0001 State change from NEW_SAVING to SUBMITTED
2016-03-21 10:45:22,809 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application added - appId: application_1458556970596_0001 user: hduser leaf-queue of parent: root #applications: 1
2016-03-21 10:45:22,810 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Accepted application application_1458556970596_0001 from user: hduser, in queue: default
2016-03-21 10:45:22,825 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1458556970596_0001 State change from SUBMITTED to ACCEPTED
2016-03-21 10:45:22,866 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1458556970596_0001_000001
2016-03-21 10:45:22,867 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1458556970596_0001_000001 State change from NEW to SUBMITTED
2016-03-21 10:45:22,896 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: maximum-am-resource-percent is insufficient to start a single application in queue, it is likely set too low. skipping enforcement to allow at least one application to start
2016-03-21 10:45:22,896 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: maximum-am-resource-percent is insufficient to start a single application in queue for user, it is likely set too low. skipping enforcement to allow at least one application to start
2016-03-21 10:45:22,897 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application application_1458556970596_0001 from user: hduser activated in queue: default
2016-03-21 10:45:22,898 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application added - appId: application_1458556970596_0001 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@1d51055, leaf-queue: default #user-pending-applications: 0 #user-active-applications: 1 #queue-pending-applications: 0 #queue-active-applications: 1
2016-03-21 10:45:22,898 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added Application Attempt appattempt_1458556970596_0001_000001 to scheduler from user hduser in queue default
2016-03-21 10:45:22,900 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1458556970596_0001_000001 State change from SUBMITTED to SCHEDULED
Below is the NameNode log from the same launch:
2016-03-21 10:45:03,746 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
2016-03-21 10:45:03,746 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 0 millisecond(s).
2016-03-21 10:45:20,613 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 3 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 7
2016-03-21 10:45:20,760 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /tmp/hadoop-yarn/staging/hduser/.staging/job_1458556970596_0001/job.jar. BP-1804768821-172.31.4.108-1458553823105 blk_1073741834_1010{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-5c350bcc-f752-43cd-80c1-80f68e2db73e:NORMAL:172.31.13.117:50010|RBW], ReplicaUnderConstruction[[DISK]DS-a1e2988f-2ef7-4005-8129-0ca18c95b2cb:NORMAL:172.31.14.198:50010|RBW]]}
2016-03-21 10:45:21,290 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* checkFileProgress: blk_1073741834_1010{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-5c350bcc-f752-43cd-80c1-80f68e2db73e:NORMAL:172.31.13.117:50010|RBW], ReplicaUnderConstruction[[DISK]DS-a1e2988f-2ef7-4005-8129-0ca18c95b2cb:NORMAL:172.31.14.198:50010|RBW]]} has not reached minimal replication 1
2016-03-21 10:45:21,292 INFO org.apache.hadoop.hdfs.server.namenode.EditLogFileOutputStream: Nothing to flush
2016-03-21 10:45:21,297 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 172.31.13.117:50010 is added to blk_1073741834_1010{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-5c350bcc-f752-43cd-80c1-80f68e2db73e:NORMAL:172.31.13.117:50010|RBW], ReplicaUnderConstruction[[DISK]DS-a1e2988f-2ef7-4005-8129-0ca18c95b2cb:NORMAL:172.31.14.198:50010|RBW]]} size 270356
2016-03-21 10:45:21,297 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 172.31.14.198:50010 is added to blk_1073741834_1010 size 270356
2016-03-21 10:45:21,706 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/hadoop-yarn/staging/hduser/.staging/job_1458556970596_0001/job.jar is closed by DFSClient_NONMAPREDUCE_-18612056_1
2016-03-21 10:45:21,714 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Increasing replication from 2 to 10 for /tmp/hadoop-yarn/staging/hduser/.staging/job_1458556970596_0001/job.jar
2016-03-21 10:45:21,812 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Increasing replication from 2 to 10 for /tmp/hadoop-yarn/staging/hduser/.staging/job_1458556970596_0001/job.split
2016-03-21 10:45:21,823 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /tmp/hadoop-yarn/staging/hduser/.staging/job_1458556970596_0001/job.split. BP-1804768821-172.31.4.108-1458553823105 blk_1073741835_1011{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-a1e2988f-2ef7-4005-8129-0ca18c95b2cb:NORMAL:172.31.14.198:50010|RBW], ReplicaUnderConstruction[[DISK]DS-5c350bcc-f752-43cd-80c1-80f68e2db73e:NORMAL:172.31.13.117:50010|RBW]]}
2016-03-21 10:45:21,849 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 172.31.13.117:50010 is added to blk_1073741835_1011{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-a1e2988f-2ef7-4005-8129-0ca18c95b2cb:NORMAL:172.31.14.198:50010|RBW], ReplicaUnderConstruction[[DISK]DS-5c350bcc-f752-43cd-80c1-80f68e2db73e:NORMAL:172.31.13.117:50010|RBW]]} size 0
2016-03-21 10:45:21,853 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 172.31.14.198:50010 is added to blk_1073741835_1011{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-a1e2988f-2ef7-4005-8129-0ca18c95b2cb:NORMAL:172.31.14.198:50010|RBW], ReplicaUnderConstruction[[DISK]DS-5c350bcc-f752-43cd-80c1-80f68e2db73e:NORMAL:172.31.13.117:50010|RBW]]} size 0
2016-03-21 10:45:21,855 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/hadoop-yarn/staging/hduser/.staging/job_1458556970596_0001/job.split is closed by DFSClient_NONMAPREDUCE_-18612056_1
2016-03-21 10:45:21,865 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /tmp/hadoop-yarn/staging/hduser/.staging/job_1458556970596_0001/job.splitmetainfo. BP-1804768821-172.31.4.108-1458553823105 blk_1073741836_1012{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-5c350bcc-f752-43cd-80c1-80f68e2db73e:NORMAL:172.31.13.117:50010|RBW], ReplicaUnderConstruction[[DISK]DS-a1e2988f-2ef7-4005-8129-0ca18c95b2cb:NORMAL:172.31.14.198:50010|RBW]]}
2016-03-21 10:45:21,876 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 172.31.14.198:50010 is added to blk_1073741836_1012{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-5c350bcc-f752-43cd-80c1-80f68e2db73e:NORMAL:172.31.13.117:50010|RBW], ReplicaUnderConstruction[[DISK]DS-a1e2988f-2ef7-4005-8129-0ca18c95b2cb:NORMAL:172.31.14.198:50010|RBW]]} size 0
2016-03-21 10:45:21,877 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 172.31.13.117:50010 is added to blk_1073741836_1012{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-5c350bcc-f752-43cd-80c1-80f68e2db73e:NORMAL:172.31.13.117:50010|RBW], ReplicaUnderConstruction[[DISK]DS-a1e2988f-2ef7-4005-8129-0ca18c95b2cb:NORMAL:172.31.14.198:50010|RBW]]} size 0
2016-03-21 10:45:21,880 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/hadoop-yarn/staging/hduser/.staging/job_1458556970596_0001/job.splitmetainfo is closed by DFSClient_NONMAPREDUCE_-18612056_1
2016-03-21 10:45:22,277 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /tmp/hadoop-yarn/staging/hduser/.staging/job_1458556970596_0001/job.xml. BP-1804768821-172.31.4.108-1458553823105 blk_1073741837_1013{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-5c350bcc-f752-43cd-80c1-80f68e2db73e:NORMAL:172.31.13.117:50010|RBW], ReplicaUnderConstruction[[DISK]DS-a1e2988f-2ef7-4005-8129-0ca18c95b2cb:NORMAL:172.31.14.198:50010|RBW]]}
2016-03-21 10:45:22,327 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 172.31.14.198:50010 is added to blk_1073741837_1013{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-5c350bcc-f752-43cd-80c1-80f68e2db73e:NORMAL:172.31.13.117:50010|RBW], ReplicaUnderConstruction[[DISK]DS-a1e2988f-2ef7-4005-8129-0ca18c95b2cb:NORMAL:172.31.14.198:50010|RBW]]} size 0
2016-03-21 10:45:22,328 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 172.31.13.117:50010 is added to blk_1073741837_1013{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-5c350bcc-f752-43cd-80c1-80f68e2db73e:NORMAL:172.31.13.117:50010|RBW], ReplicaUnderConstruction[[DISK]DS-a1e2988f-2ef7-4005-8129-0ca18c95b2cb:NORMAL:172.31.14.198:50010|RBW]]} size 0
2016-03-21 10:45:22,332 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/hadoop-yarn/staging/hduser/.staging/job_1458556970596_0001/job.xml is closed by DFSClient_NONMAPREDUCE_-18612056_1
2016-03-21 10:45:33,746 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30001 milliseconds
2016-03-21 10:45:33,747 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 0 millisecond(s).
2016-03-21 10:46:03,748 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30001 milliseconds
2016-03-21 10:46:03,748 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 0 millisecond(s).
2016-03-21 10:46:33,748 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30001 milliseconds
2016-03-21 10:46:33,749 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 0 millisecond(s).
2016-03-21 10:47:03,749 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
2016-03-21 10:47:03,750 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 1 millisecond(s).
Any ideas? Thank you in advance for your support!
Below are the contents of my *-site.xml files. Note: I have applied some sizing values to the properties, but I had the exact same issue with a minimal configuration (only the mandatory properties).
core-site.xml
<configuration>
<property><name>fs.defaultFS</name><value>hdfs://ip-172-31-4-108:8020</value></property>
</configuration>
hdfs-site.xml
<configuration>
<property><name>dfs.replication</name><value>2</value></property>
<property><name>dfs.namenode.name.dir</name><value>file:///xvda1/dfs/nn</value></property>
<property><name>dfs.datanode.data.dir</name><value>file:///xvda1/dfs/dn</value></property>
</configuration>
mapred-site.xml
<configuration>
<property><name>mapreduce.jobhistory.address</name><value>ip-172-31-4-108:10020</value></property>
<property><name>mapreduce.jobhistory.webapp.address</name><value>ip-172-31-4-108:19888</value></property>
<property><name>mapreduce.framework.name</name><value>yarn</value></property>
<property><name>mapreduce.map.memory.mb</name><value>512</value></property>
<property><name>mapreduce.reduce.memory.mb</name><value>1024</value></property>
<property><name>mapreduce.map.java.opts</name><value>410</value></property>
<property><name>mapreduce.reduce.java.opts</name><value>820</value></property>
</configuration>
yarn-site.xml
<configuration>
<property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
<property><name>yarn.resourcemanager.hostname</name><value>ip-172-31-4-108</value></property>
<property><name>yarn.nodemanager.local-dirs</name><value>file:///xvda1/nodemgr/local</value></property>
<property><name>yarn.nodemanager.log-dirs</name><value>/var/log/hadoop-yarn/containers</value></property>
<property><name>yarn.nodemanager.remote-app-log-dir</name><value>/var/log/hadoop-yarn/apps</value></property>
<property><name>yarn.log-aggregation-enable</name><value>true</value></property>
<property><name>yarn.app.mapreduce.am.resource.mb</name><value>1024</value></property>
<property><name>yarn.app.mapreduce.am.command-opts</name><value>820</value></property>
<property><name>yarn.nodemanager.resource.memory-mb</name><value>6291456</value></property>
<property><name>yarn.scheduler.minimum_allocation-mb</name><value>524288</value></property>
<property><name>yarn.scheduler.maximum_allocation-mb</name><value>6291456</value></property>
</configuration>
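For comparison, the YARN and MapReduce memory properties are specified in megabytes and use hyphenated names, and the `*.java.opts` / `*.command-opts` properties take JVM flags rather than bare numbers. A minimal sketch of how these are conventionally written (the values below are illustrative assumptions, not tuned for any particular instance type):

```xml
<configuration>
  <!-- Total memory (in MB) one NodeManager may hand out to containers; ~6 GB is an assumed example -->
  <property><name>yarn.nodemanager.resource.memory-mb</name><value>6144</value></property>
  <!-- Smallest and largest single container the scheduler will grant, in MB (note the hyphens) -->
  <property><name>yarn.scheduler.minimum-allocation-mb</name><value>512</value></property>
  <property><name>yarn.scheduler.maximum-allocation-mb</name><value>6144</value></property>
  <!-- Opts properties expect JVM flags such as -Xmx, not a plain number -->
  <property><name>yarn.app.mapreduce.am.command-opts</name><value>-Xmx820m</value></property>
</configuration>
```

The same `-Xmx` convention would apply to `mapreduce.map.java.opts` and `mapreduce.reduce.java.opts` in mapred-site.xml.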
I am trying to display the result of GROUPed records using DUMP, but instead of the data I get pages of log output and no records. I am only working with 10 records.
The details:
grunt> DUMP grouped_records;
2016-02-21 17:34:24,338 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: GROUP_BY,FILTER
2016-02-21 17:34:24,339 [main] INFO org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer - {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, DuplicateForEachColumnRewrite, GroupByConstParallelSetter, ImplicitSplitInserter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, NewPartitionFilterOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter], RULES_DISABLED=[FilterLogicExpressionSimplifier, PartitionFilterOptimizer]}
2016-02-21 17:34:24,354 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
2016-02-21 17:34:24,374 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 1
2016-02-21 17:34:24,374 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 1
2016-02-21 17:34:24,434 [main] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at /0.0.0.0:8032
2016-02-21 17:34:24,440 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig script settings are added to the job
2016-02-21 17:34:24,527 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
2016-02-21 17:34:24,530 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Reduce phase detected, estimating # of required reducers.
2016-02-21 17:34:24,534 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Using reducer estimator: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator
2016-02-21 17:34:24,541 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator - BytesPerReducer=1000000000 maxReducers=999 totalInputFileSize=142
2016-02-21 17:34:24,541 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting Parallelism to 1
2016-02-21 17:34:25,128 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - creating jar file Job662989067023626482.jar
2016-02-21 17:34:31,290 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - jar file Job662989067023626482.jar created
2016-02-21 17:34:31,335 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting up single store job
2016-02-21 17:34:31,338 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Key [pig.schematuple] is false, will not generate code.
2016-02-21 17:34:31,338 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Starting process to move generated code to distributed cache
2016-02-21 17:34:31,338 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Setting key [pig.schematuple.classes] with classes to deserialize []
2016-02-21 17:34:31,549 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map-reduce job(s) waiting for submission.
2016-02-21 17:34:31,550 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
2016-02-21 17:34:31,556 [JobControl] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at /0.0.0.0:8032
2016-02-21 17:34:31,607 [JobControl] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2016-02-21 17:34:31,918 [JobControl] INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
2016-02-21 17:34:31,918 [JobControl] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1
2016-02-21 17:34:31,921 [JobControl] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
2016-02-21 17:34:31,979 [JobControl] INFO org.apache.hadoop.mapreduce.JobSubmitter - number of splits:1
2016-02-21 17:34:32,092 [JobControl] INFO org.apache.hadoop.mapreduce.JobSubmitter - Submitting tokens for job: job_1454294818944_0034
2016-02-21 17:34:32,192 [JobControl] INFO org.apache.hadoop.yarn.client.api.impl.YarnClientImpl - Submitted application application_1454294818944_0034
2016-02-21 17:34:32,198 [JobControl] INFO org.apache.hadoop.mapreduce.Job - The url to track the job: http://quickstart.cloudera:8088/proxy/application_1454294818944_0034/
2016-02-21 17:34:32,198 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - HadoopJobId: job_1454294818944_0034
2016-02-21 17:34:32,198 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Processing aliases filtered_records,grouped_records,records
2016-02-21 17:34:32,198 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - detailed locations: M: records[1,10],records[-1,-1],filtered_records[2,19],grouped_records[3,18] C: R:
2016-02-21 17:34:32,198 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - More information at: http://localhost:50030/jobdetails.jsp?jobid=job_1454294818944_0034
2016-02-21 17:34:32,428 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
2016-02-21 17:35:02,623 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 50% complete
2016-02-21 17:35:23,469 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2016-02-21 17:35:23,470 [main] INFO org.apache.pig.tools.pigstats.SimplePigStats - Script Statistics:
HadoopVersion PigVersion UserId StartedAt FinishedAt Features
2.6.0-cdh5.5.0 0.12.0-cdh5.5.0 cloudera 2016-02-21 17:34:24 2016-02-21 17:35:23 GROUP_BY,FILTER
Success!
Job Stats (time in seconds):
JobId Maps Reduces MaxMapTime MinMapTIme AvgMapTime MedianMapTime MaxReduceTime MinReduceTime AvgReduceTime MedianReducetime Alias Feature Outputs
job_1454294818944_0034 1 1 12 12 12 12 16 16 16 16 filtered_records,grouped_records,records GROUP_BY hdfs://quickstart.cloudera:8020/tmp/temp-1703423271/tmp-988597361,
Input(s):
Successfully read 10 records (525 bytes) from: "/user/hduser/input/maxtemppig.tsv"
Output(s):
Successfully stored 0 records in: "hdfs://quickstart.cloudera:8020/tmp/temp-1703423271/tmp-988597361"
Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0
Job DAG:
job_1454294818944_0034
2016-02-21 17:35:23,646 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Success!
2016-02-21 17:35:23,648 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2016-02-21 17:35:23,648 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
2016-02-21 17:35:23,649 [main] INFO org.apache.pig.data.SchemaTupleBackend - Key [pig.schematuple] was not set... will not generate code.
2016-02-21 17:35:23,660 [main] INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
2016-02-21 17:35:23,660 [main] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1
The commands I tried:
records = LOAD '/user/hduser/input/maxtemppig.tsv' AS (year:chararray, temperature:int, quality:int);
filtered_records = FILTER records BY temperature IN (-10,19) AND quality IN (0,1,4,5,9);
DUMP filtered_records;
grouped_records = GROUP filtered_records BY year;
DUMP grouped_records;
max_temp = FOREACH grouped_records GENERATE group, MAX(filtered_records.temperature);
DUMP max_temp;
My input TSV file:
1950 32 01459
1951 33 01459
1950 21 01459
1940 24 01459
1950 33 01459
2000 30 01459
2010 44 01459
2014 -10 01459
2016 -20 01459
2011 19 01459
What am I missing?
There is a high chance that the parsing is not working and you are filtering out all records.
Try
records = LOAD '/user/hduser/input/maxtemppig.tsv' USING PigStorage('\t') AS (year:chararray, temperature:int, quality:int);
I was running PageRank on the s3://aws-publicdatasets/common-crawl/parse-output/segment/1346876860819/metadata-XXXX dataset. The program worked when I used 10 files (about 1 GB) on 2 m1.medium instances, but when I use 300 files (20 GB) on 5 m3.xlarge instances, it fails at map 39%, reduce 4%. What could be the reason for the failure?
Here are the logs.
stderr:
AttemptID:attempt_1411372099942_0001_m_000010_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000014_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000015_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000057_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000103_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000094_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000109_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000108_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000133_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000136_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000010_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000151_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000014_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000168_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000167_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000015_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000174_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000175_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000057_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000181_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000182_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000190_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000103_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000109_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000094_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000200_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000108_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000133_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000199_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000136_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000010_2 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000151_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000206_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000207_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000014_2 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000168_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000175_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000167_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000174_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000015_2 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000057_2 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000181_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000182_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000190_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000103_2 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000094_2 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000200_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000109_2 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000108_2 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000133_2 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000199_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000136_2 Timed out after 600 secs
Part of the syslog:
08:24:24,791 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1411372099942_0001_m_000168_1, Status : FAILED
2014-09-22 08:24:46,873 INFO org.apache.hadoop.mapreduce.Job (main): map 39% reduce 4%
2014-09-22 08:24:54,903 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1411372099942_0001_m_000175_1, Status : FAILED
2014-09-22 08:24:54,904 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1411372099942_0001_m_000167_1, Status : FAILED
2014-09-22 08:24:54,904 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1411372099942_0001_m_000174_1, Status : FAILED
2014-09-22 08:24:55,908 INFO org.apache.hadoop.mapreduce.Job (main): map 38% reduce 4%
2014-09-22 08:25:13,968 INFO org.apache.hadoop.mapreduce.Job (main): map 39% reduce 4%
2014-09-22 08:25:25,007 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1411372099942_0001_m_000015_2, Status : FAILED
2014-09-22 08:26:24,210 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1411372099942_0001_m_000057_2, Status : FAILED
2014-09-22 08:26:54,322 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1411372099942_0001_m_000181_1, Status : FAILED
2014-09-22 08:27:24,432 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1411372099942_0001_m_000182_1, Status : FAILED
2014-09-22 08:27:25,435 INFO org.apache.hadoop.mapreduce.Job (main): map 38% reduce 4%
2014-09-22 08:27:54,543 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1411372099942_0001_m_000190_1, Status : FAILED
2014-09-22 08:28:54,751 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1411372099942_0001_m_000103_2, Status : FAILED
2014-09-22 08:29:24,851 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1411372099942_0001_m_000094_2, Status : FAILED
2014-09-22 08:29:24,852 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1411372099942_0001_m_000200_1, Status : FAILED
2014-09-22 08:29:24,853 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1411372099942_0001_m_000109_2, Status : FAILED
2014-09-22 08:29:48,931 INFO org.apache.hadoop.mapreduce.Job (main): map 39% reduce 4%
2014-09-22 08:29:54,954 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1411372099942_0001_m_000108_2, Status : FAILED
2014-09-22 08:30:24,066 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1411372099942_0001_m_000133_2, Status : FAILED
2014-09-22 08:32:54,599 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1411372099942_0001_m_000199_1, Status : FAILED
2014-09-22 08:32:54,600 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1411372099942_0001_m_000136_2, Status : FAILED
2014-09-22 08:34:25,910 INFO org.apache.hadoop.mapreduce.Job (main): map 100% reduce 100%
2014-09-22 08:34:25,915 INFO org.apache.hadoop.mapreduce.Job (main): Job job_1411372099942_0001 failed with state FAILED due to: Task failed task_1411372099942_0001_m_000010
Job failed as tasks failed. failedMaps:1 failedReduces:0
Attempts for: s-1W7C8YIFC87Y8, Job 1411372099942_0001, Task
2014-09-22 08:18:27,238 WARN main org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2014-09-22 08:18:27,322 WARN main org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2014-09-22 08:18:28,462 INFO main org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2014-09-22 08:18:28,496 INFO main org.apache.hadoop.metrics2.sink.cloudwatch.CloudWatchSink: Initializing the CloudWatchSink for metrics.
2014-09-22 08:18:28,795 INFO main org.apache.hadoop.metrics2.impl.MetricsSinkAdapter: Sink file started
2014-09-22 08:18:28,967 INFO main org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 300 second(s).
2014-09-22 08:18:28,967 INFO main org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system started
2014-09-22 08:18:28,982 INFO main org.apache.hadoop.mapred.YarnChild: Executing with tokens:
2014-09-22 08:18:28,983 INFO main org.apache.hadoop.mapred.YarnChild: Kind: mapreduce.job, Service: job_1411372099942_0001, Ident: (org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier@3fc15856)
2014-09-22 08:18:29,157 INFO main org.apache.hadoop.mapred.YarnChild: Sleeping for 0ms before retrying again. Got null now.
2014-09-22 08:18:29,880 INFO main org.apache.hadoop.mapred.YarnChild: mapreduce.cluster.local.dir for child: /mnt/var/lib/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1411372099942_0001,/mnt1/var/lib/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1411372099942_0001,/mnt2/var/lib/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1411372099942_0001
2014-09-22 08:18:30,164 WARN main org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2014-09-22 08:18:30,182 WARN main org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2014-09-22 08:18:31,063 INFO main org.apache.hadoop.conf.Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
2014-09-22 08:18:32,100 INFO main org.apache.hadoop.mapred.Task: Using ResourceCalculatorProcessTree : [ ]
2014-09-22 08:18:32,605 INFO main org.apache.hadoop.mapred.MapTask: Processing split: s3://aws-publicdatasets/common-crawl/parse-output/segment/1346876860819/metadata-00122:0+67108864
2014-09-22 08:18:32,810 INFO main amazon.emr.metrics.MetricsSaver: MetricsSaver YarnChild root:hdfs:///mnt/var/em/ period:120 instanceId:i-ec84e7c1 jobflow:j-27XODJ8WMW4VP
2014-09-22 08:18:33,205 WARN main org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2014-09-22 08:18:33,219 WARN main org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2014-09-22 08:18:33,221 INFO main com.amazon.ws.emr.hadoop.fs.guice.EmrFSBaseModule: Consistency disabled, using com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem as FileSystem implementation.
2014-09-22 08:18:35,024 INFO main com.amazon.ws.emr.hadoop.fs.EmrFileSystem: Using com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem as filesystem implementation
2014-09-22 08:18:36,001 WARN main org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2014-09-22 08:18:36,002 WARN main org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2014-09-22 08:18:36,024 INFO main org.apache.hadoop.mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2014-09-22 08:18:36,514 INFO main org.apache.hadoop.mapred.MapTask: (EQUATOR) 0 kvi 52428796(209715184)
2014-09-22 08:18:36,514 INFO main org.apache.hadoop.mapred.MapTask: mapreduce.task.io.sort.mb: 200
2014-09-22 08:18:36,514 INFO main org.apache.hadoop.mapred.MapTask: soft limit at 167772160
2014-09-22 08:18:36,514 INFO main org.apache.hadoop.mapred.MapTask: bufstart = 0; bufvoid = 209715200
2014-09-22 08:18:36,514 INFO main org.apache.hadoop.mapred.MapTask: kvstart = 52428796; length = 13107200
2014-09-22 08:18:36,597 INFO main com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem: Opening 's3://aws-publicdatasets/common-crawl/parse-output/segment/1346876860819/metadata-00122' for reading
2014-09-22 08:18:36,716 INFO main org.apache.hadoop.io.compress.zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
2014-09-22 08:18:36,720 INFO main org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
2014-09-22 08:18:36,726 INFO main org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
2014-09-22 08:18:36,726 INFO main org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
2014-09-22 08:18:36,727 INFO main org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
task_1411372099942_0001_m_000010 has timed out. Try increasing the task-timeout configuration parameter, e.g.:
mapreduce.task.timeout=12000000
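If you want the higher timeout cluster-wide rather than per job, a minimal sketch of the corresponding mapred-site.xml entry would look like this (the 12000000 ms value is the one suggested above; pick whatever suits your longest-running tasks):

```xml
<!-- mapred-site.xml: per-task progress timeout in milliseconds.
     A task attempt that reports no progress for this long is killed
     and rescheduled by the ApplicationMaster. -->
<property>
  <name>mapreduce.task.timeout</name>
  <value>12000000</value>
</property>
```

The same value can also be passed per job on the command line with -Dmapreduce.task.timeout=12000000, assuming the job's driver parses generic options (e.g. via ToolRunner).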
I've set up an Amazon EMR job flow with 1 on-demand core node and 4 spot (bid-priced) task nodes. When I run my job on only the core node, each step finishes within 1 hour. When I'm lucky and keep 1 core + 4 task nodes, steps usually finish within 10 minutes.
My problem is that when the task nodes are reclaimed by Amazon, the remaining task attempts slow to a crawl and can drag on for 7-10 hours.
As you can see from the logs below, everything went fine from 08:56 (0%) to 09:01 (43%), but then task attempts started to fail. Given that running the step on the single core node alone takes less than an hour, I would expect it to go from 43% to 100% in under an hour. Instead it drags on for at least another 5+ hours: 09:01 - 14:30. This doesn't look normal to me (not to mention the time and money wasted). How can I fix this? What could cause it?
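The repeated 100% -> 99% -> 100% cycles below, each roughly 10-15 minutes apart, suggest attempts stranded on the lost spot nodes are only being declared dead after long liveness/progress timeouts and then retried one at a time. Two knobs that commonly govern this in Hadoop 2.x are sketched here; property names and defaults are as I recall them for this era, so verify them against your EMR AMI's mapred-default.xml / yarn-default.xml before relying on them:

```xml
<!-- mapred-site.xml: kill a task attempt that reports no progress
     after 10 minutes instead of letting it linger (value in ms). -->
<property>
  <name>mapreduce.task.timeout</name>
  <value>600000</value>
</property>

<!-- yarn-site.xml: how long the ResourceManager waits before declaring
     a silent NodeManager (e.g. a reclaimed spot node) lost.
     Lowering it lets containers on dead nodes be rescheduled sooner. -->
<property>
  <name>yarn.nm.liveness-monitor.expiry-interval-ms</name>
  <value>120000</value>
</property>
```

This is a sketch of where to look, not a tuned configuration: shortening these intervals trades slower-failure tolerance for faster rescheduling onto the surviving core node.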
2014-05-21 08:55:39,317 INFO com.amazon.ws.emr.hadoop.fs.EmrFileSystem (main): Opening 's3://test/log-parser/code/hadoop-script.sh' for reading
2014-05-21 08:55:45,555 INFO com.amazon.ws.emr.hadoop.fs.EmrFileSystem (main): Opening 's3://test/log-parser/code/hadoop-upload-script.sh' for reading
2014-05-21 08:55:52,990 INFO com.amazon.ws.emr.hadoop.fs.EmrFileSystem (main): Opening 's3://test/log-parser/code/log-parser.jar' for reading
2014-05-21 08:55:59,840 INFO com.innovid.logParser.LogParserMapReduce (main): LogParserMapReduce: waitingForCompletion
2014-05-21 08:56:00,190 INFO org.apache.hadoop.yarn.client.RMProxy (main): Connecting to ResourceManager at /1.1.1.1:9022
2014-05-21 08:56:05,397 INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat (main): Total input paths to process : 31
2014-05-21 08:56:05,426 INFO com.hadoop.compression.lzo.GPLNativeCodeLoader (main): Loaded native gpl library
2014-05-21 08:56:05,434 INFO com.hadoop.compression.lzo.LzoCodec (main): Successfully loaded & initialized native-lzo library [hadoop-lzo rev c7d54fffe5a853c437ee23413ba71fc6af23c91d]
2014-05-21 08:56:05,669 INFO org.apache.hadoop.mapreduce.JobSubmitter (main): number of splits:135
2014-05-21 08:56:05,697 INFO org.apache.hadoop.conf.Configuration.deprecation (main): mapred.job.classpath.files is deprecated. Instead, use mapreduce.job.classpath.files
2014-05-21 08:56:05,698 INFO org.apache.hadoop.conf.Configuration.deprecation (main): user.name is deprecated. Instead, use mapreduce.job.user.name
2014-05-21 08:56:05,698 INFO org.apache.hadoop.conf.Configuration.deprecation (main): mapred.output.compress is deprecated. Instead, use mapreduce.output.fileoutputformat.compress
2014-05-21 08:56:05,698 INFO org.apache.hadoop.conf.Configuration.deprecation (main): mapred.jar is deprecated. Instead, use mapreduce.job.jar
2014-05-21 08:56:05,698 INFO org.apache.hadoop.conf.Configuration.deprecation (main): mapred.output.compression.codec is deprecated. Instead, use mapreduce.output.fileoutputformat.compress.codec
2014-05-21 08:56:05,698 INFO org.apache.hadoop.conf.Configuration.deprecation (main): mapred.cache.files.filesizes is deprecated. Instead, use mapreduce.job.cache.files.filesizes
2014-05-21 08:56:05,698 INFO org.apache.hadoop.conf.Configuration.deprecation (main): mapred.cache.files is deprecated. Instead, use mapreduce.job.cache.files
2014-05-21 08:56:05,698 INFO org.apache.hadoop.conf.Configuration.deprecation (main): mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
2014-05-21 08:56:05,698 INFO org.apache.hadoop.conf.Configuration.deprecation (main): mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
2014-05-21 08:56:05,699 INFO org.apache.hadoop.conf.Configuration.deprecation (main): mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
2014-05-21 08:56:05,699 INFO org.apache.hadoop.conf.Configuration.deprecation (main): mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
2014-05-21 08:56:05,699 INFO org.apache.hadoop.conf.Configuration.deprecation (main): keep.failed.task.files is deprecated. Instead, use mapreduce.task.files.preserve.failedtasks
2014-05-21 08:56:05,699 INFO org.apache.hadoop.conf.Configuration.deprecation (main): mapred.used.genericoptionsparser is deprecated. Instead, use mapreduce.client.genericoptionsparser.used
2014-05-21 08:56:05,700 INFO org.apache.hadoop.conf.Configuration.deprecation (main): mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
2014-05-21 08:56:05,700 INFO org.apache.hadoop.conf.Configuration.deprecation (main): mapred.job.name is deprecated. Instead, use mapreduce.job.name
2014-05-21 08:56:05,700 INFO org.apache.hadoop.conf.Configuration.deprecation (main): mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
2014-05-21 08:56:05,700 INFO org.apache.hadoop.conf.Configuration.deprecation (main): mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
2014-05-21 08:56:05,700 INFO org.apache.hadoop.conf.Configuration.deprecation (main): mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
2014-05-21 08:56:05,700 INFO org.apache.hadoop.conf.Configuration.deprecation (main): mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
2014-05-21 08:56:05,700 INFO org.apache.hadoop.conf.Configuration.deprecation (main): mapred.cache.files.timestamps is deprecated. Instead, use mapreduce.job.cache.files.timestamps
2014-05-21 08:56:05,701 INFO org.apache.hadoop.conf.Configuration.deprecation (main): mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
2014-05-21 08:56:05,701 INFO org.apache.hadoop.conf.Configuration.deprecation (main): mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
2014-05-21 08:56:06,123 INFO org.apache.hadoop.mapreduce.JobSubmitter (main): Submitting tokens for job: job_1400094353_0186
2014-05-21 08:56:06,779 INFO org.apache.hadoop.yarn.client.api.impl.YarnClientImpl (main): Submitted application application_1400094353_0186 to ResourceManager at /10.42.102.163:9022
2014-05-21 08:56:06,871 INFO org.apache.hadoop.mapreduce.Job (main): The url to track the job: http://1.1.1.1:9046/proxy/application_1400094353_0186/
2014-05-21 08:56:06,872 INFO org.apache.hadoop.mapreduce.Job (main): Running job: job_1400094353_0186
2014-05-21 08:56:15,441 INFO org.apache.hadoop.mapreduce.Job (main): Job job_1400094353_0186 running in uber mode : false
2014-05-21 08:56:15,443 INFO org.apache.hadoop.mapreduce.Job (main): map 0% reduce 0%
2014-05-21 08:57:10,871 INFO org.apache.hadoop.mapreduce.Job (main): map 1% reduce 0%
2014-05-21 08:57:19,928 INFO org.apache.hadoop.mapreduce.Job (main): map 2% reduce 0%
2014-05-21 08:57:25,971 INFO org.apache.hadoop.mapreduce.Job (main): map 3% reduce 0%
2014-05-21 08:57:36,033 INFO org.apache.hadoop.mapreduce.Job (main): map 4% reduce 0%
2014-05-21 08:57:52,141 INFO org.apache.hadoop.mapreduce.Job (main): map 5% reduce 0%
2014-05-21 08:58:00,189 INFO org.apache.hadoop.mapreduce.Job (main): map 6% reduce 0%
2014-05-21 08:58:07,234 INFO org.apache.hadoop.mapreduce.Job (main): map 7% reduce 0%
2014-05-21 08:58:14,285 INFO org.apache.hadoop.mapreduce.Job (main): map 8% reduce 0%
2014-05-21 08:58:20,321 INFO org.apache.hadoop.mapreduce.Job (main): map 9% reduce 0%
2014-05-21 08:58:28,369 INFO org.apache.hadoop.mapreduce.Job (main): map 10% reduce 0%
2014-05-21 08:58:35,421 INFO org.apache.hadoop.mapreduce.Job (main): map 11% reduce 0%
2014-05-21 08:58:44,481 INFO org.apache.hadoop.mapreduce.Job (main): map 12% reduce 0%
2014-05-21 08:58:51,530 INFO org.apache.hadoop.mapreduce.Job (main): map 13% reduce 0%
2014-05-21 08:58:59,583 INFO org.apache.hadoop.mapreduce.Job (main): map 14% reduce 0%
2014-05-21 08:59:06,625 INFO org.apache.hadoop.mapreduce.Job (main): map 15% reduce 0%
2014-05-21 08:59:12,697 INFO org.apache.hadoop.mapreduce.Job (main): map 16% reduce 0%
2014-05-21 08:59:20,766 INFO org.apache.hadoop.mapreduce.Job (main): map 17% reduce 0%
2014-05-21 08:59:26,804 INFO org.apache.hadoop.mapreduce.Job (main): map 18% reduce 0%
2014-05-21 08:59:33,868 INFO org.apache.hadoop.mapreduce.Job (main): map 19% reduce 0%
2014-05-21 08:59:39,907 INFO org.apache.hadoop.mapreduce.Job (main): map 20% reduce 0%
2014-05-21 08:59:46,959 INFO org.apache.hadoop.mapreduce.Job (main): map 21% reduce 0%
2014-05-21 08:59:54,003 INFO org.apache.hadoop.mapreduce.Job (main): map 22% reduce 0%
2014-05-21 09:00:01,052 INFO org.apache.hadoop.mapreduce.Job (main): map 23% reduce 0%
2014-05-21 09:00:08,108 INFO org.apache.hadoop.mapreduce.Job (main): map 24% reduce 0%
2014-05-21 09:00:16,196 INFO org.apache.hadoop.mapreduce.Job (main): map 25% reduce 0%
2014-05-21 09:00:22,241 INFO org.apache.hadoop.mapreduce.Job (main): map 26% reduce 0%
2014-05-21 09:00:29,288 INFO org.apache.hadoop.mapreduce.Job (main): map 27% reduce 0%
2014-05-21 09:00:36,328 INFO org.apache.hadoop.mapreduce.Job (main): map 28% reduce 0%
2014-05-21 09:00:43,410 INFO org.apache.hadoop.mapreduce.Job (main): map 29% reduce 0%
2014-05-21 09:01:22,288 INFO org.apache.hadoop.mapreduce.Job (main): map 30% reduce 0%
2014-05-21 09:01:26,318 INFO org.apache.hadoop.mapreduce.Job (main): map 31% reduce 0%
2014-05-21 09:01:29,334 INFO org.apache.hadoop.mapreduce.Job (main): map 32% reduce 0%
2014-05-21 09:01:32,351 INFO org.apache.hadoop.mapreduce.Job (main): map 33% reduce 0%
2014-05-21 09:01:35,368 INFO org.apache.hadoop.mapreduce.Job (main): map 34% reduce 0%
2014-05-21 09:01:38,384 INFO org.apache.hadoop.mapreduce.Job (main): map 35% reduce 0%
2014-05-21 09:01:40,395 INFO org.apache.hadoop.mapreduce.Job (main): map 36% reduce 0%
2014-05-21 09:01:41,401 INFO org.apache.hadoop.mapreduce.Job (main): map 37% reduce 0%
2014-05-21 09:01:43,416 INFO org.apache.hadoop.mapreduce.Job (main): map 38% reduce 0%
2014-05-21 09:01:45,428 INFO org.apache.hadoop.mapreduce.Job (main): map 39% reduce 0%
2014-05-21 09:01:47,439 INFO org.apache.hadoop.mapreduce.Job (main): map 40% reduce 0%
2014-05-21 09:01:49,451 INFO org.apache.hadoop.mapreduce.Job (main): map 41% reduce 0%
2014-05-21 09:01:51,474 INFO org.apache.hadoop.mapreduce.Job (main): map 42% reduce 0%
2014-05-21 09:01:54,491 INFO org.apache.hadoop.mapreduce.Job (main): map 43% reduce 0%
2014-05-21 09:06:44,031 INFO org.apache.hadoop.mapreduce.Job (main): map 61% reduce 0%
2014-05-21 09:07:44,353 INFO org.apache.hadoop.mapreduce.Job (main): map 100% reduce 0%
2014-05-21 09:22:04,475 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000106_0, Status : FAILED
2014-05-21 09:22:05,499 INFO org.apache.hadoop.mapreduce.Job (main): map 99% reduce 0%
2014-05-21 09:32:10,244 INFO org.apache.hadoop.mapreduce.Job (main): map 100% reduce 0%
2014-05-21 09:37:25,611 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000079_0, Status : FAILED
2014-05-21 09:37:25,613 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000094_0, Status : FAILED
2014-05-21 09:37:25,614 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000107_0, Status : FAILED
2014-05-21 09:37:25,615 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000088_0, Status : FAILED
2014-05-21 09:37:26,620 INFO org.apache.hadoop.mapreduce.Job (main): map 97% reduce 0%
2014-05-21 09:48:50,521 INFO org.apache.hadoop.mapreduce.Job (main): map 100% reduce 0%
2014-05-21 09:52:45,491 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000081_0, Status : FAILED
2014-05-21 09:52:46,495 INFO org.apache.hadoop.mapreduce.Job (main): map 99% reduce 0%
2014-05-21 10:05:30,717 INFO org.apache.hadoop.mapreduce.Job (main): map 100% reduce 0%
2014-05-21 10:08:06,373 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000090_0, Status : FAILED
2014-05-21 10:08:07,377 INFO org.apache.hadoop.mapreduce.Job (main): map 99% reduce 0%
2014-05-21 10:18:50,064 INFO org.apache.hadoop.mapreduce.Job (main): map 100% reduce 0%
2014-05-21 10:23:28,146 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000110_0, Status : FAILED
2014-05-21 10:23:29,150 INFO org.apache.hadoop.mapreduce.Job (main): map 99% reduce 0%
2014-05-21 10:35:29,790 INFO org.apache.hadoop.mapreduce.Job (main): map 100% reduce 0%
2014-05-21 10:38:48,507 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000104_0, Status : FAILED
2014-05-21 10:38:49,511 INFO org.apache.hadoop.mapreduce.Job (main): map 99% reduce 0%
2014-05-21 10:52:10,333 INFO org.apache.hadoop.mapreduce.Job (main): map 100% reduce 0%
2014-05-21 10:54:09,754 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000108_0, Status : FAILED
2014-05-21 10:54:10,758 INFO org.apache.hadoop.mapreduce.Job (main): map 99% reduce 0%
2014-05-21 11:05:30,115 INFO org.apache.hadoop.mapreduce.Job (main): map 100% reduce 0%
2014-05-21 11:09:29,958 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000095_0, Status : FAILED
2014-05-21 11:09:30,961 INFO org.apache.hadoop.mapreduce.Job (main): map 99% reduce 0%
2014-05-21 11:22:10,617 INFO org.apache.hadoop.mapreduce.Job (main): map 100% reduce 0%
2014-05-21 11:24:51,159 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000084_0, Status : FAILED
2014-05-21 11:24:51,160 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000116_0, Status : FAILED
2014-05-21 11:24:52,163 INFO org.apache.hadoop.mapreduce.Job (main): map 99% reduce 0%
2014-05-21 11:35:29,377 INFO org.apache.hadoop.mapreduce.Job (main): map 100% reduce 0%
2014-05-21 11:40:12,354 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000078_0, Status : FAILED
2014-05-21 11:40:13,358 INFO org.apache.hadoop.mapreduce.Job (main): map 99% reduce 0%
2014-05-21 11:52:09,812 INFO org.apache.hadoop.mapreduce.Job (main): map 100% reduce 0%
2014-05-21 11:55:32,516 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000113_0, Status : FAILED
2014-05-21 11:55:33,520 INFO org.apache.hadoop.mapreduce.Job (main): map 99% reduce 0%
2014-05-21 12:08:50,313 INFO org.apache.hadoop.mapreduce.Job (main): map 100% reduce 0%
2014-05-21 12:10:53,733 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000093_0, Status : FAILED
2014-05-21 12:10:54,739 INFO org.apache.hadoop.mapreduce.Job (main): map 99% reduce 0%
2014-05-21 12:22:10,023 INFO org.apache.hadoop.mapreduce.Job (main): map 100% reduce 0%
2014-05-21 12:26:14,825 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000097_0, Status : FAILED
2014-05-21 12:26:15,828 INFO org.apache.hadoop.mapreduce.Job (main): map 99% reduce 0%
2014-05-21 12:38:50,340 INFO org.apache.hadoop.mapreduce.Job (main): map 100% reduce 0%
2014-05-21 12:41:35,872 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000114_0, Status : FAILED
2014-05-21 12:41:36,876 INFO org.apache.hadoop.mapreduce.Job (main): map 99% reduce 0%
2014-05-21 12:41:45,906 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000049_0, Status : FAILED
2014-05-21 12:41:55,938 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000053_0, Status : FAILED
2014-05-21 12:41:56,942 INFO org.apache.hadoop.mapreduce.Job (main): map 98% reduce 0%
2014-05-21 12:42:05,974 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000041_0, Status : FAILED
2014-05-21 12:42:06,977 INFO org.apache.hadoop.mapreduce.Job (main): map 97% reduce 0%
2014-05-21 12:42:16,009 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000062_0, Status : FAILED
2014-05-21 12:42:17,013 INFO org.apache.hadoop.mapreduce.Job (main): map 96% reduce 0%
2014-05-21 12:42:26,043 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000045_0, Status : FAILED
2014-05-21 12:42:36,077 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000050_0, Status : FAILED
2014-05-21 12:42:37,080 INFO org.apache.hadoop.mapreduce.Job (main): map 95% reduce 0%
2014-05-21 12:52:09,942 INFO org.apache.hadoop.mapreduce.Job (main): map 98% reduce 0%
2014-05-21 12:55:29,568 INFO org.apache.hadoop.mapreduce.Job (main): map 100% reduce 0%
2014-05-21 12:56:56,841 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000109_0, Status : FAILED
2014-05-21 12:56:56,842 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000099_0, Status : FAILED
2014-05-21 12:56:57,845 INFO org.apache.hadoop.mapreduce.Job (main): map 99% reduce 0%
2014-05-21 12:57:06,876 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000044_0, Status : FAILED
2014-05-21 12:57:07,879 INFO org.apache.hadoop.mapreduce.Job (main): map 98% reduce 0%
2014-05-21 12:57:16,909 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000072_0, Status : FAILED
2014-05-21 12:57:17,912 INFO org.apache.hadoop.mapreduce.Job (main): map 97% reduce 0%
2014-05-21 12:57:25,974 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000057_0, Status : FAILED
2014-05-21 12:57:26,977 INFO org.apache.hadoop.mapreduce.Job (main): map 96% reduce 0%
2014-05-21 12:57:36,006 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000056_0, Status : FAILED
2014-05-21 12:57:46,038 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000069_0, Status : FAILED
2014-05-21 12:57:47,041 INFO org.apache.hadoop.mapreduce.Job (main): map 95% reduce 0%
2014-05-21 12:57:56,069 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000076_0, Status : FAILED
2014-05-21 12:57:57,072 INFO org.apache.hadoop.mapreduce.Job (main): map 94% reduce 0%
2014-05-21 12:58:06,102 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000042_0, Status : FAILED
2014-05-21 12:58:07,105 INFO org.apache.hadoop.mapreduce.Job (main): map 93% reduce 0%
2014-05-21 12:58:16,134 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000071_0, Status : FAILED
2014-05-21 12:58:26,166 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000051_0, Status : FAILED
2014-05-21 12:58:27,171 INFO org.apache.hadoop.mapreduce.Job (main): map 92% reduce 0%
2014-05-21 12:58:36,207 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000039_0, Status : FAILED
2014-05-21 12:58:37,210 INFO org.apache.hadoop.mapreduce.Job (main): map 91% reduce 0%
2014-05-21 12:58:46,240 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000060_0, Status : FAILED
2014-05-21 12:58:47,243 INFO org.apache.hadoop.mapreduce.Job (main): map 90% reduce 0%
2014-05-21 12:58:56,271 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000061_0, Status : FAILED
2014-05-21 13:08:50,122 INFO org.apache.hadoop.mapreduce.Job (main): map 99% reduce 0%
2014-05-21 13:12:09,752 INFO org.apache.hadoop.mapreduce.Job (main): map 100% reduce 0%
2014-05-21 13:12:17,779 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000091_0, Status : FAILED
2014-05-21 13:12:17,780 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000083_0, Status : FAILED
2014-05-21 13:12:17,781 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000115_0, Status : FAILED
2014-05-21 13:12:18,784 INFO org.apache.hadoop.mapreduce.Job (main): map 98% reduce 0%
2014-05-21 13:12:27,813 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000070_0, Status : FAILED
2014-05-21 13:12:28,816 INFO org.apache.hadoop.mapreduce.Job (main): map 97% reduce 0%
2014-05-21 13:25:30,288 INFO org.apache.hadoop.mapreduce.Job (main): map 100% reduce 0%
2014-05-21 13:27:37,695 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000085_0, Status : FAILED
2014-05-21 13:27:38,698 INFO org.apache.hadoop.mapreduce.Job (main): map 99% reduce 0%
2014-05-21 13:27:47,729 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000058_0, Status : FAILED
2014-05-21 13:27:57,762 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000063_0, Status : FAILED
2014-05-21 13:27:58,765 INFO org.apache.hadoop.mapreduce.Job (main): map 98% reduce 0%
2014-05-21 13:38:50,829 INFO org.apache.hadoop.mapreduce.Job (main): map 100% reduce 0%
2014-05-21 13:42:58,603 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000086_0, Status : FAILED
2014-05-21 13:42:59,606 INFO org.apache.hadoop.mapreduce.Job (main): map 99% reduce 0%
2014-05-21 13:43:08,636 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000077_0, Status : FAILED
2014-05-21 13:55:30,006 INFO org.apache.hadoop.mapreduce.Job (main): map 100% reduce 0%
2014-05-21 13:58:19,532 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000098_0, Status : FAILED
2014-05-21 13:58:20,535 INFO org.apache.hadoop.mapreduce.Job (main): map 99% reduce 0%
2014-05-21 14:08:49,521 INFO org.apache.hadoop.mapreduce.Job (main): map 100% reduce 0%
2014-05-21 14:13:40,429 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000101_0, Status : FAILED
2014-05-21 14:13:41,433 INFO org.apache.hadoop.mapreduce.Job (main): map 99% reduce 0%
2014-05-21 14:13:50,462 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000059_0, Status : FAILED
2014-05-21 14:25:29,656 INFO org.apache.hadoop.mapreduce.Job (main): map 100% reduce 0%
2014-05-21 14:29:01,327 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000103_0, Status : FAILED
2014-05-21 14:29:02,330 INFO org.apache.hadoop.mapreduce.Job (main): map 99% reduce 0%
2014-05-21 14:29:11,361 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000067_0, Status : FAILED
2014-05-21 14:29:21,394 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000075_0, Status : FAILED
2014-05-21 14:29:22,397 INFO org.apache.hadoop.mapreduce.Job (main): map 98% reduce 0%
2014-05-21 14:29:31,427 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000046_0, Status : FAILED
2014-05-21 14:29:32,430 INFO org.apache.hadoop.mapreduce.Job (main): map 97% reduce 0%
2014-05-21 14:29:41,458 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000047_0, Status : FAILED
2014-05-21 14:29:42,461 INFO org.apache.hadoop.mapreduce.Job (main): map 96% reduce 0%
2014-05-21 14:29:51,491 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000054_0, Status : FAILED
2014-05-21 14:30:01,550 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000068_0, Status : FAILED
2014-05-21 14:30:02,554 INFO org.apache.hadoop.mapreduce.Job (main): map 95% reduce 0%
2014-05-21 14:30:11,591 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000048_0, Status : FAILED
2014-05-21 14:30:12,594 INFO org.apache.hadoop.mapreduce.Job (main): map 94% reduce 0%
2014-05-21 14:30:21,626 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000038_0, Status : FAILED
2014-05-21 14:30:22,632 INFO org.apache.hadoop.mapreduce.Job (main): map 93% reduce 0%
2014-05-21 14:30:31,660 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000052_0, Status : FAILED
2014-05-21 14:30:41,691 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000043_0, Status : FAILED
2014-05-21 14:30:42,694 INFO org.apache.hadoop.mapreduce.Job (main): map 92% reduce 0%
2014-05-21 14:30:51,724 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000065_0, Status : FAILED
2014-05-21 14:30:52,727 INFO org.apache.hadoop.mapreduce.Job (main): map 91% reduce 0%
2014-05-21 14:42:09,877 INFO org.apache.hadoop.mapreduce.Job (main): map 100% reduce 0%
2014-05-21 14:44:22,301 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000082_0, Status : FAILED
2014-05-21 14:44:22,302 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000080_0, Status : FAILED
2014-05-21 14:44:22,303 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000102_0, Status : FAILED
2014-05-21 14:44:22,304 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000096_0, Status : FAILED
2014-05-21 14:44:23,307 INFO org.apache.hadoop.mapreduce.Job (main): map 97% reduce 0%
2014-05-21 14:44:32,338 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000055_0, Status : FAILED
2014-05-21 14:44:33,341 INFO org.apache.hadoop.mapreduce.Job (main): map 96% reduce 0%
2014-05-21 14:44:42,371 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000073_0, Status : FAILED
2014-05-21 14:44:52,405 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000074_0, Status : FAILED
2014-05-21 14:44:53,408 INFO org.apache.hadoop.mapreduce.Job (main): map 95% reduce 0%
2014-05-21 14:45:02,441 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000064_0, Status : FAILED
2014-05-21 14:45:03,446 INFO org.apache.hadoop.mapreduce.Job (main): map 88% reduce 0%
2014-05-21 14:45:12,480 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000066_0, Status : FAILED
2014-05-21 14:45:13,486 INFO org.apache.hadoop.mapreduce.Job (main): map 87% reduce 0%
2014-05-21 14:45:22,520 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1400503094353_0186_m_000040_0, Status : FAILED