hadoop streaming failed with error code 1 in rstudio-server

I am running a single-node cluster.
I installed rmr2 and rhdfs as root (sudo R) and wrote some code in RStudio Server, but it fails with an error and I can't work out what is wrong.
Thanks for reading; any help would be appreciated.
> library("rmr2", lib.loc="/usr/lib64/R/library")
Please review your hadoop settings. See help(hadoop.settings)
> library("rhdfs", lib.loc="/usr/lib64/R/library")
Loading required package: rJava
HADOOP_CMD=/home/knu/hadoop/hadoop-2.7.3/bin/hadoop
Be sure to run hdfs.init()
> hdfs.init()
17/04/12 21:39:15 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> detach("package:rmr2", unload=TRUE)
> library("rmr2", lib.loc="/usr/lib64/R/library")
Please review your hadoop settings. See help(hadoop.settings)
> small.ints <- to.dfs(1:10)
17/04/12 21:41:07 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
17/04/12 21:41:07 INFO compress.CodecPool: Got brand-new compressor [.deflate]
> from.dfs(small.ints)
17/04/12 21:41:25 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
17/04/12 21:41:25 INFO compress.CodecPool: Got brand-new decompressor [.deflate]
$key
NULL
$val
[1] 1 2 3 4 5 6 7 8 9 10
> result <- mapreduce(input = small.ints,
+                     map = function(k, v) cbind(v, v^2))
Execution Log:
packageJobJar: [/tmp/hadoop-unjar5646463829062726981/] [] /tmp/streamjob3604004992263150530.jar tmpDir=null
17/04/12 21:41:38 INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:8032
17/04/12 21:41:38 INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:8032
17/04/12 21:41:39 INFO mapred.FileInputFormat: Total input paths to process : 1
17/04/12 21:41:39 INFO mapreduce.JobSubmitter: number of splits:2
17/04/12 21:41:39 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
17/04/12 21:41:39 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1491995822063_0001
17/04/12 21:41:40 INFO impl.YarnClientImpl: Submitted application application_1491995822063_0001
17/04/12 21:41:40 INFO mapreduce.Job: The url to track the job: http://0.0.0.0:8089/proxy/application_1491995822063_0001/
17/04/12 21:41:40 INFO mapreduce.Job: Running job: job_1491995822063_0001
17/04/12 21:41:54 INFO mapreduce.Job: Job job_1491995822063_0001 running in uber mode : false
17/04/12 21:41:54 INFO mapreduce.Job: map 0% reduce 0%
17/04/12 21:42:01 INFO mapreduce.Job: Task Id : attempt_1491995822063_0001_m_000001_0, Status : FAILED
Container [pid=12055,containerID=container_1491995822063_0001_01_000003] is running beyond virtual memory limits. Current usage: 109.6 MB of 1 GB physical memory used; 2.1 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1491995822063_0001_01_000003 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 12073 12055 12055 12055 (java) 193 8 2153918464 27756 /home/knu/hadoop/jdk1.8.0_121/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx400M -Djava.io.tmpdir=/home/knu/hadoop/hadoop-2.7.3/data/yarn/nm-local-dir/usercache/knu/appcache/application_1491995822063_0001/container_1491995822063_0001_01_000003/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000003 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.0.1 39313 attempt_1491995822063_0001_m_000001_0 3
|- 12055 12053 12055 12055 (bash) 0 0 115847168 303 /bin/bash -c /home/knu/hadoop/jdk1.8.0_121/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx400M -Djava.io.tmpdir=/home/knu/hadoop/hadoop-2.7.3/data/yarn/nm-local-dir/usercache/knu/appcache/application_1491995822063_0001/container_1491995822063_0001_01_000003/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000003 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.0.1 39313 attempt_1491995822063_0001_m_000001_0 3 1>/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000003/stdout 2>/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000003/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
17/04/12 21:42:01 INFO mapreduce.Job: Task Id : attempt_1491995822063_0001_m_000000_0, Status : FAILED
Container [pid=12054,containerID=container_1491995822063_0001_01_000002] is running beyond virtual memory limits. Current usage: 107.3 MB of 1 GB physical memory used; 2.1 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1491995822063_0001_01_000002 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 12072 12054 12054 12054 (java) 204 9 2154971136 27169 /home/knu/hadoop/jdk1.8.0_121/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx400M -Djava.io.tmpdir=/home/knu/hadoop/hadoop-2.7.3/data/yarn/nm-local-dir/usercache/knu/appcache/application_1491995822063_0001/container_1491995822063_0001_01_000002/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000002 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.0.1 39313 attempt_1491995822063_0001_m_000000_0 2
|- 12054 12052 12054 12054 (bash) 0 0 115847168 302 /bin/bash -c /home/knu/hadoop/jdk1.8.0_121/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx400M -Djava.io.tmpdir=/home/knu/hadoop/hadoop-2.7.3/data/yarn/nm-local-dir/usercache/knu/appcache/application_1491995822063_0001/container_1491995822063_0001_01_000002/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000002 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.0.1 39313 attempt_1491995822063_0001_m_000000_0 2 1>/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000002/stdout 2>/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000002/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
17/04/12 21:42:07 INFO mapreduce.Job: Task Id : attempt_1491995822063_0001_m_000001_1, Status : FAILED
Container [pid=12143,containerID=container_1491995822063_0001_01_000004] is running beyond virtual memory limits. Current usage: 105.0 MB of 1 GB physical memory used; 2.1 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1491995822063_0001_01_000004 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 12151 12143 12143 12143 (java) 177 5 2153918464 26571 /home/knu/hadoop/jdk1.8.0_121/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx400M -Djava.io.tmpdir=/home/knu/hadoop/hadoop-2.7.3/data/yarn/nm-local-dir/usercache/knu/appcache/application_1491995822063_0001/container_1491995822063_0001_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.0.1 39313 attempt_1491995822063_0001_m_000001_1 4
|- 12143 12142 12143 12143 (bash) 0 0 115847168 303 /bin/bash -c /home/knu/hadoop/jdk1.8.0_121/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx400M -Djava.io.tmpdir=/home/knu/hadoop/hadoop-2.7.3/data/yarn/nm-local-dir/usercache/knu/appcache/application_1491995822063_0001/container_1491995822063_0001_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.0.1 39313 attempt_1491995822063_0001_m_000001_1 4 1>/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000004/stdout 2>/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000004/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
17/04/12 21:42:10 INFO mapreduce.Job: Task Id : attempt_1491995822063_0001_m_000000_1, Status : FAILED
Container [pid=12164,containerID=container_1491995822063_0001_01_000005] is running beyond virtual memory limits. Current usage: 130.7 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1491995822063_0001_01_000005 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 12172 12164 12164 12164 (java) 293 9 2175307776 32535 /home/knu/hadoop/jdk1.8.0_121/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx400M -Djava.io.tmpdir=/home/knu/hadoop/hadoop-2.7.3/data/yarn/nm-local-dir/usercache/knu/appcache/application_1491995822063_0001/container_1491995822063_0001_01_000005/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000005 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.0.1 39313 attempt_1491995822063_0001_m_000000_1 5
|- 12164 12163 12164 12164 (bash) 0 0 115847168 302 /bin/bash -c /home/knu/hadoop/jdk1.8.0_121/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx400M -Djava.io.tmpdir=/home/knu/hadoop/hadoop-2.7.3/data/yarn/nm-local-dir/usercache/knu/appcache/application_1491995822063_0001/container_1491995822063_0001_01_000005/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000005 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.0.1 39313 attempt_1491995822063_0001_m_000000_1 5 1>/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000005/stdout 2>/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000005/stderr
|- 12224 12172 12164 12164 (R) 0 0 116293632 469 /bin/sh /usr/lib64/R/bin/R --slave --no-restore --vanilla --file=./rmr-streaming-map2d5212712621
|- 12228 12224 12164 12164 (R) 0 0 116293632 158 /bin/sh /usr/lib64/R/bin/R --slave --no-restore --vanilla --file=./rmr-streaming-map2d5212712621
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
17/04/12 21:42:16 INFO mapreduce.Job: Task Id : attempt_1491995822063_0001_m_000001_2, Status : FAILED
Container [pid=12254,containerID=container_1491995822063_0001_01_000007] is running beyond virtual memory limits. Current usage: 160.6 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1491995822063_0001_01_000007 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 12319 12262 12254 12254 (R) 10 2 268447744 8256 /usr/lib64/R/bin/exec/R --slave --no-restore --vanilla --file=./rmr-streaming-map2d5212712621
|- 12262 12254 12254 12254 (java) 295 9 2175307776 32549 /home/knu/hadoop/jdk1.8.0_121/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx400M -Djava.io.tmpdir=/home/knu/hadoop/hadoop-2.7.3/data/yarn/nm-local-dir/usercache/knu/appcache/application_1491995822063_0001/container_1491995822063_0001_01_000007/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000007 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.0.1 39313 attempt_1491995822063_0001_m_000001_2 7
|- 12254 12253 12254 12254 (bash) 0 0 115847168 302 /bin/bash -c /home/knu/hadoop/jdk1.8.0_121/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx400M -Djava.io.tmpdir=/home/knu/hadoop/hadoop-2.7.3/data/yarn/nm-local-dir/usercache/knu/appcache/application_1491995822063_0001/container_1491995822063_0001_01_000007/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000007 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.0.1 39313 attempt_1491995822063_0001_m_000001_2 7 1>/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000007/stdout 2>/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000007/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
17/04/12 21:42:16 INFO mapreduce.Job: Task Id : attempt_1491995822063_0001_m_000000_2, Status : FAILED
Container [pid=12280,containerID=container_1491995822063_0001_01_000008] is running beyond virtual memory limits. Current usage: 92.1 MB of 1 GB physical memory used; 2.1 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1491995822063_0001_01_000008 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 12280 12279 12280 12280 (bash) 0 0 115847168 303 /bin/bash -c /home/knu/hadoop/jdk1.8.0_121/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx400M -Djava.io.tmpdir=/home/knu/hadoop/hadoop-2.7.3/data/yarn/nm-local-dir/usercache/knu/appcache/application_1491995822063_0001/container_1491995822063_0001_01_000008/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000008 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.0.1 39313 attempt_1491995822063_0001_m_000000_2 8 1>/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000008/stdout 2>/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000008/stderr
|- 12289 12280 12280 12280 (java) 137 5 2142965760 23280 /home/knu/hadoop/jdk1.8.0_121/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx400M -Djava.io.tmpdir=/home/knu/hadoop/hadoop-2.7.3/data/yarn/nm-local-dir/usercache/knu/appcache/application_1491995822063_0001/container_1491995822063_0001_01_000008/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/knu/hadoop/hadoop-2.7.3/logs/userlogs/application_1491995822063_0001/container_1491995822063_0001_01_000008 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.0.1 39313 attempt_1491995822063_0001_m_000000_2 8
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
17/04/12 21:42:23 INFO mapreduce.Job: map 100% reduce 0%
17/04/12 21:42:23 INFO mapreduce.Job: Job job_1491995822063_0001 failed with state FAILED due to: Task failed task_1491995822063_0001_m_000001
Job failed as tasks failed. failedMaps:1 failedReduces:0
17/04/12 21:42:23 INFO mapreduce.Job: Counters: 13
Job Counters
Failed map tasks=7
Killed map tasks=1
Launched map tasks=8
Other local map tasks=6
Data-local map tasks=2
Total time spent by all maps in occupied slots (ms)=39157
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=39157
Total vcore-milliseconds taken by all map tasks=39157
Total megabyte-milliseconds taken by all map tasks=40096768
Map-Reduce Framework
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
17/04/12 21:42:23 ERROR streaming.StreamJob: Job not successful!
Streaming Command Failed!
Error in mr(map = map, reduce = reduce, combine = combine, vectorized.reduce, :
hadoop streaming failed with error code 1
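Every failed attempt above shows the same pattern: YARN kills the container for running beyond virtual memory limits (e.g. 109.6 MB of 1 GB physical memory used; 2.1 GB of 2.1 GB virtual memory used), the task exits with code 143, and the streaming job finally fails with error code 1. The 2.1 GB ceiling is the default yarn.nodemanager.vmem-pmem-ratio of 2.1 applied to the 1 GB container, and the extra R processes spawned by the streaming mapper (visible in the later process-tree dumps) push the virtual size past it. A common workaround on a single-node setup is to relax or disable the virtual-memory check in yarn-site.xml; the sketch below shows both options (the ratio value is illustrative), and YARN must be restarted (stop-yarn.sh / start-yarn.sh) for it to take effect:
<configuration>
<!-- Sketch: disable the virtual-memory check that is killing the containers... -->
<property><name>yarn.nodemanager.vmem-check-enabled</name><value>false</value></property>
<!-- ...or keep the check and raise the ratio (the default is 2.1) -->
<property><name>yarn.nodemanager.vmem-pmem-ratio</name><value>4</value></property>
</configuration>
Raising mapreduce.map.memory.mb (which raises the virtual ceiling along with it) is an alternative if you prefer to keep the check enabled.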

Related

Unable to start the MGT Development Environment

I'm trying to set up the MGT Development Environment as per the instructions on the site. I'm running Ubuntu 16.04 and native Docker.
I did a fresh pull before trying any of this. After running the container, the browser at 127.0.0.1:3333 just shows a generic HTTP 500 error. Running docker logs on the container shows the following log entries:
docker logs 7b1f04c29bf2
/usr/lib/python2.7/dist-packages/supervisor/options.py:296: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
'Supervisord is running as root and it is searching '
2017-03-28 14:03:53,908 CRIT Supervisor running as root (no user in config file)
2017-03-28 14:03:53,908 WARN Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
2017-03-28 14:03:53,916 INFO RPC interface 'supervisor' initialized
2017-03-28 14:03:53,917 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2017-03-28 14:03:53,917 INFO supervisord started with pid 1
2017-03-28 14:03:54,919 INFO spawned: 'sshd' with pid 9
2017-03-28 14:03:54,920 INFO spawned: 'postfix' with pid 10
2017-03-28 14:03:54,922 INFO spawned: 'php-fpm' with pid 11
2017-03-28 14:03:54,928 INFO spawned: 'redis' with pid 13
2017-03-28 14:03:54,930 INFO spawned: 'varnish' with pid 16
2017-03-28 14:03:54,932 INFO spawned: 'cron' with pid 18
2017-03-28 14:03:54,934 INFO spawned: 'nginx' with pid 19
2017-03-28 14:03:54,935 INFO spawned: 'clp-server' with pid 20
2017-03-28 14:03:54,937 INFO spawned: 'clp5-fpm' with pid 23
2017-03-28 14:03:54,938 INFO spawned: 'mysql' with pid 24
2017-03-28 14:03:54,940 INFO spawned: 'memcached' with pid 26
2017-03-28 14:03:54,940 INFO exited: redis (exit status 0; not expected)
2017-03-28 14:03:54,941 INFO success: postfix entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2017-03-28 14:03:55,011 INFO exited: mysql (exit status 0; not expected)
2017-03-28 14:03:55,102 INFO exited: postfix (exit status 0; expected)
2017-03-28 14:03:55,255 INFO exited: varnish (exit status 0; not expected)
2017-03-28 14:03:56,256 INFO success: sshd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-03-28 14:03:56,257 INFO success: php-fpm entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-03-28 14:03:56,259 INFO spawned: 'redis' with pid 382
2017-03-28 14:03:56,262 INFO spawned: 'varnish' with pid 383
2017-03-28 14:03:56,263 INFO success: cron entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-03-28 14:03:56,263 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-03-28 14:03:56,263 INFO success: clp-server entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-03-28 14:03:56,263 INFO success: clp5-fpm entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-03-28 14:03:56,266 INFO spawned: 'mysql' with pid 384
2017-03-28 14:03:56,266 INFO success: memcached entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-03-28 14:03:56,279 INFO exited: redis (exit status 0; not expected)
2017-03-28 14:03:56,279 CRIT reaped unknown pid 385)
2017-03-28 14:03:56,306 INFO exited: mysql (exit status 0; not expected)
2017-03-28 14:03:56,585 INFO exited: varnish (exit status 2; not expected)
2017-03-28 14:03:58,588 INFO spawned: 'redis' with pid 396
2017-03-28 14:03:58,589 INFO spawned: 'varnish' with pid 397
2017-03-28 14:03:58,590 INFO spawned: 'mysql' with pid 398
2017-03-28 14:03:58,599 INFO exited: redis (exit status 0; not expected)
2017-03-28 14:03:58,605 CRIT reaped unknown pid 399)
2017-03-28 14:03:58,632 INFO exited: mysql (exit status 0; not expected)
2017-03-28 14:03:58,913 INFO exited: varnish (exit status 2; not expected)
2017-03-28 14:04:01,919 INFO spawned: 'redis' with pid 410
2017-03-28 14:04:01,921 INFO spawned: 'varnish' with pid 411
2017-03-28 14:04:01,923 INFO spawned: 'mysql' with pid 412
2017-03-28 14:04:01,930 INFO exited: redis (exit status 0; not expected)
2017-03-28 14:04:01,930 INFO gave up: redis entered FATAL state, too many start retries too quickly
2017-03-28 14:04:01,930 CRIT reaped unknown pid 413)
2017-03-28 14:04:01,969 INFO exited: mysql (exit status 0; not expected)
2017-03-28 14:04:02,238 INFO gave up: mysql entered FATAL state, too many start retries too quickly
2017-03-28 14:04:02,238 INFO exited: varnish (exit status 2; not expected)
2017-03-28 14:04:03,240 INFO gave up: varnish entered FATAL state, too many start retries too quickly
If I log on to the container via docker exec -it bash, it shows the following running processes:
root@mgt-dev-70:/# ps -aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.1 48144 16348 ? Ss+ 14:03 0:00 /usr/bin/python /usr/bin/supervisord
root 9 0.0 0.0 55600 5268 ? S 14:03 0:00 /usr/sbin/sshd -D
root 11 0.0 0.3 819816 49984 ? S 14:03 0:00 php-fpm: master process (/etc/php/7.0/fpm/php-fpm.conf)
root 18 0.0 0.0 25904 2236 ? S 14:03 0:00 /usr/sbin/cron -f
root 19 0.0 0.1 64660 23456 ? S 14:03 0:00 nginx: master process /usr/sbin/nginx -g daemon off;
root 20 0.0 0.0 93752 8432 ? S 14:03 0:00 nginx: master process /usr/sbin/clp-server -g daemon off;
root 23 0.0 0.2 854428 39528 ? S 14:03 0:00 php-fpm: master process (/etc/clp5/fpm/php-fpm.conf)
root 25 0.1 0.0 37256 8876 ? Ssl 14:03 0:00 /usr/bin/redis-server 127.0.0.1:6379
memcache 26 0.0 0.0 327452 2724 ? Sl 14:03 0:00 /usr/bin/memcached -p 11211 -u memcache -m 256 -c 1024
root 40 0.0 0.1 65564 21516 ? S 14:03 0:00 nginx: worker process
root 102 0.0 0.0 94588 4304 ? S 14:03 0:00 nginx: worker process
root 156 0.0 0.0 36620 3948 ? Ss 14:03 0:00 /usr/lib/postfix/master
postfix 157 0.0 0.0 38684 3780 ? S 14:03 0:00 pickup -l -t unix -u -c
postfix 158 0.0 0.0 38732 3892 ? S 14:03 0:00 qmgr -l -t unix -u
varnish 164 0.0 0.0 126924 7172 ? Ss 14:03 0:00 /usr/sbin/varnishd -a :6081 -T :6082 -f /etc/varnish/default.vcl -s malloc,256m
vcache 165 0.0 0.7 314848 123484 ? Sl 14:03 0:00 /usr/sbin/varnishd -a :6081 -T :6082 -f /etc/varnish/default.vcl -s malloc,256m
root 495 0.0 0.0 20244 2984 ? Ss 14:12 0:00 bash
root 501 0.0 0.0 17500 2036 ? R+ 14:12 0:00 ps -aux
That's really as much as I know. Any guidance on getting this working would be appreciated, as it looks like a great, quick and easy way to get going on Magento 2. Thanks.
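One observation from the output above, offered as a guess rather than a definitive diagnosis: supervisord logs redis, mysql and varnish as "exited ... not expected" and eventually gives up with FATAL, yet the ps listing still shows redis-server and varnishd running. That pattern usually means those programs fork into the background, so supervisord (which expects foreground processes) reaps the short-lived parent and keeps retrying while the daemonized child stays alive. The one service genuinely missing from the process list is MySQL, and an unreachable database is the most likely source of the HTTP 500; the MySQL error log inside the container should show why it exits immediately.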

Hadoop 2.6.4 MR job quick freeze

Hadoop 2.6.4: 1 master + 2 slaves on AWS EC2
master: namenode, secondary namenode, resource manager
slave: datanode, node manager
When running a test MR job (wordcount), it freezes right away:
hduser@ip-172-31-4-108:~$ hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.4.jar wordcount /data/shakespeare /data/out1
16/03/21 10:45:19 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-4-108/172.31.4.108:8032
16/03/21 10:45:21 INFO input.FileInputFormat: Total input paths to process : 5
16/03/21 10:45:21 INFO mapreduce.JobSubmitter: number of splits:5
16/03/21 10:45:22 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1458556970596_0001
16/03/21 10:45:22 INFO impl.YarnClientImpl: Submitted application application_1458556970596_0001
16/03/21 10:45:22 INFO mapreduce.Job: The url to track the job: http://ip-172-31-4-108:8088/proxy/application_1458556970596_0001/
16/03/21 10:45:22 INFO mapreduce.Job: Running job: job_1458556970596_0001
When running start-dfs.sh and start-yarn.sh on the master, all daemons start successfully (verified with jps) on the corresponding EC2 instances.
Below is the ResourceManager log when launching the MR job:
2016-03-21 10:45:20,152 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Allocated new applicationId: 1
2016-03-21 10:45:22,784 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Application with id 1 submitted by user hduser
2016-03-21 10:45:22,785 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Storing application with id application_1458556970596_0001
2016-03-21 10:45:22,787 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hduser IP=172.31.4.108 OPERATION=Submit Application Request TARGET=ClientRMService RESULT=SUCCESS APPID=application_1458556970596_0001
2016-03-21 10:45:22,788 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1458556970596_0001 State change from NEW to NEW_SAVING
2016-03-21 10:45:22,805 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing info for app: application_1458556970596_0001
2016-03-21 10:45:22,807 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1458556970596_0001 State change from NEW_SAVING to SUBMITTED
2016-03-21 10:45:22,809 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application added - appId: application_1458556970596_0001 user: hduser leaf-queue of parent: root #applications: 1
2016-03-21 10:45:22,810 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Accepted application application_1458556970596_0001 from user: hduser, in queue: default
2016-03-21 10:45:22,825 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1458556970596_0001 State change from SUBMITTED to ACCEPTED
2016-03-21 10:45:22,866 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1458556970596_0001_000001
2016-03-21 10:45:22,867 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1458556970596_0001_000001 State change from NEW to SUBMITTED
2016-03-21 10:45:22,896 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: maximum-am-resource-percent is insufficient to start a single application in queue, it is likely set too low. skipping enforcement to allow at least one application to start
2016-03-21 10:45:22,896 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: maximum-am-resource-percent is insufficient to start a single application in queue for user, it is likely set too low. skipping enforcement to allow at least one application to start
2016-03-21 10:45:22,897 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application application_1458556970596_0001 from user: hduser activated in queue: default
2016-03-21 10:45:22,898 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application added - appId: application_1458556970596_0001 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User#1d51055, leaf-queue: default #user-pending-applications: 0 #user-active-applications: 1 #queue-pending-applications: 0 #queue-active-applications: 1
2016-03-21 10:45:22,898 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added Application Attempt appattempt_1458556970596_0001_000001 to scheduler from user hduser in queue default
2016-03-21 10:45:22,900 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1458556970596_0001_000001 State change from SUBMITTED to SCHEDULED
Below is the NameNode log when launching the MR job:
2016-03-21 10:45:03,746 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
2016-03-21 10:45:03,746 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 0 millisecond(s).
2016-03-21 10:45:20,613 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 3 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 7
2016-03-21 10:45:20,760 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /tmp/hadoop-yarn/staging/hduser/.staging/job_1458556970596_0001/job.jar. BP-1804768821-172.31.4.108-1458553823105 blk_1073741834_1010{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-5c350bcc-f752-43cd-80c1-80f68e2db73e:NORMAL:172.31.13.117:50010|RBW], ReplicaUnderConstruction[[DISK]DS-a1e2988f-2ef7-4005-8129-0ca18c95b2cb:NORMAL:172.31.14.198:50010|RBW]]}
2016-03-21 10:45:21,290 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* checkFileProgress: blk_1073741834_1010{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-5c350bcc-f752-43cd-80c1-80f68e2db73e:NORMAL:172.31.13.117:50010|RBW], ReplicaUnderConstruction[[DISK]DS-a1e2988f-2ef7-4005-8129-0ca18c95b2cb:NORMAL:172.31.14.198:50010|RBW]]} has not reached minimal replication 1
2016-03-21 10:45:21,292 INFO org.apache.hadoop.hdfs.server.namenode.EditLogFileOutputStream: Nothing to flush
2016-03-21 10:45:21,297 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 172.31.13.117:50010 is added to blk_1073741834_1010{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-5c350bcc-f752-43cd-80c1-80f68e2db73e:NORMAL:172.31.13.117:50010|RBW], ReplicaUnderConstruction[[DISK]DS-a1e2988f-2ef7-4005-8129-0ca18c95b2cb:NORMAL:172.31.14.198:50010|RBW]]} size 270356
2016-03-21 10:45:21,297 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 172.31.14.198:50010 is added to blk_1073741834_1010 size 270356
2016-03-21 10:45:21,706 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/hadoop-yarn/staging/hduser/.staging/job_1458556970596_0001/job.jar is closed by DFSClient_NONMAPREDUCE_-18612056_1
2016-03-21 10:45:21,714 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Increasing replication from 2 to 10 for /tmp/hadoop-yarn/staging/hduser/.staging/job_1458556970596_0001/job.jar
2016-03-21 10:45:21,812 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Increasing replication from 2 to 10 for /tmp/hadoop-yarn/staging/hduser/.staging/job_1458556970596_0001/job.split
2016-03-21 10:45:21,823 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /tmp/hadoop-yarn/staging/hduser/.staging/job_1458556970596_0001/job.split. BP-1804768821-172.31.4.108-1458553823105 blk_1073741835_1011{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-a1e2988f-2ef7-4005-8129-0ca18c95b2cb:NORMAL:172.31.14.198:50010|RBW], ReplicaUnderConstruction[[DISK]DS-5c350bcc-f752-43cd-80c1-80f68e2db73e:NORMAL:172.31.13.117:50010|RBW]]}
2016-03-21 10:45:21,849 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 172.31.13.117:50010 is added to blk_1073741835_1011{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-a1e2988f-2ef7-4005-8129-0ca18c95b2cb:NORMAL:172.31.14.198:50010|RBW], ReplicaUnderConstruction[[DISK]DS-5c350bcc-f752-43cd-80c1-80f68e2db73e:NORMAL:172.31.13.117:50010|RBW]]} size 0
2016-03-21 10:45:21,853 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 172.31.14.198:50010 is added to blk_1073741835_1011{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-a1e2988f-2ef7-4005-8129-0ca18c95b2cb:NORMAL:172.31.14.198:50010|RBW], ReplicaUnderConstruction[[DISK]DS-5c350bcc-f752-43cd-80c1-80f68e2db73e:NORMAL:172.31.13.117:50010|RBW]]} size 0
2016-03-21 10:45:21,855 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/hadoop-yarn/staging/hduser/.staging/job_1458556970596_0001/job.split is closed by DFSClient_NONMAPREDUCE_-18612056_1
2016-03-21 10:45:21,865 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /tmp/hadoop-yarn/staging/hduser/.staging/job_1458556970596_0001/job.splitmetainfo. BP-1804768821-172.31.4.108-1458553823105 blk_1073741836_1012{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-5c350bcc-f752-43cd-80c1-80f68e2db73e:NORMAL:172.31.13.117:50010|RBW], ReplicaUnderConstruction[[DISK]DS-a1e2988f-2ef7-4005-8129-0ca18c95b2cb:NORMAL:172.31.14.198:50010|RBW]]}
2016-03-21 10:45:21,876 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 172.31.14.198:50010 is added to blk_1073741836_1012{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-5c350bcc-f752-43cd-80c1-80f68e2db73e:NORMAL:172.31.13.117:50010|RBW], ReplicaUnderConstruction[[DISK]DS-a1e2988f-2ef7-4005-8129-0ca18c95b2cb:NORMAL:172.31.14.198:50010|RBW]]} size 0
2016-03-21 10:45:21,877 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 172.31.13.117:50010 is added to blk_1073741836_1012{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-5c350bcc-f752-43cd-80c1-80f68e2db73e:NORMAL:172.31.13.117:50010|RBW], ReplicaUnderConstruction[[DISK]DS-a1e2988f-2ef7-4005-8129-0ca18c95b2cb:NORMAL:172.31.14.198:50010|RBW]]} size 0
2016-03-21 10:45:21,880 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/hadoop-yarn/staging/hduser/.staging/job_1458556970596_0001/job.splitmetainfo is closed by DFSClient_NONMAPREDUCE_-18612056_1
2016-03-21 10:45:22,277 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /tmp/hadoop-yarn/staging/hduser/.staging/job_1458556970596_0001/job.xml. BP-1804768821-172.31.4.108-1458553823105 blk_1073741837_1013{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-5c350bcc-f752-43cd-80c1-80f68e2db73e:NORMAL:172.31.13.117:50010|RBW], ReplicaUnderConstruction[[DISK]DS-a1e2988f-2ef7-4005-8129-0ca18c95b2cb:NORMAL:172.31.14.198:50010|RBW]]}
2016-03-21 10:45:22,327 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 172.31.14.198:50010 is added to blk_1073741837_1013{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-5c350bcc-f752-43cd-80c1-80f68e2db73e:NORMAL:172.31.13.117:50010|RBW], ReplicaUnderConstruction[[DISK]DS-a1e2988f-2ef7-4005-8129-0ca18c95b2cb:NORMAL:172.31.14.198:50010|RBW]]} size 0
2016-03-21 10:45:22,328 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 172.31.13.117:50010 is added to blk_1073741837_1013{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-5c350bcc-f752-43cd-80c1-80f68e2db73e:NORMAL:172.31.13.117:50010|RBW], ReplicaUnderConstruction[[DISK]DS-a1e2988f-2ef7-4005-8129-0ca18c95b2cb:NORMAL:172.31.14.198:50010|RBW]]} size 0
2016-03-21 10:45:22,332 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/hadoop-yarn/staging/hduser/.staging/job_1458556970596_0001/job.xml is closed by DFSClient_NONMAPREDUCE_-18612056_1
2016-03-21 10:45:33,746 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30001 milliseconds
2016-03-21 10:45:33,747 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 0 millisecond(s).
2016-03-21 10:46:03,748 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30001 milliseconds
2016-03-21 10:46:03,748 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 0 millisecond(s).
2016-03-21 10:46:33,748 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30001 milliseconds
2016-03-21 10:46:33,749 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 0 millisecond(s).
2016-03-21 10:47:03,749 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
2016-03-21 10:47:03,750 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 1 millisecond(s).
Any ideas? Thank you in advance for your support!
Below is the content of my *-site.xml files. Note: I have applied some sizing calculations to the memory properties, but I had the exact same issue with a minimal configuration (only the mandatory properties).
core-site.xml
<configuration>
<property><name>fs.defaultFS</name><value>hdfs://ip-172-31-4-108:8020</value></property>
</configuration>
hdfs-site.xml
<configuration>
<property><name>dfs.replication</name><value>2</value></property>
<property><name>dfs.namenode.name.dir</name><value>file:///xvda1/dfs/nn</value></property>
<property><name>dfs.datanode.data.dir</name><value>file:///xvda1/dfs/dn</value></property>
</configuration>
mapred-site.xml
<configuration>
<property><name>mapreduce.jobhistory.address</name><value>ip-172-31-4-108:10020</value></property>
<property><name>mapreduce.jobhistory.webapp.address</name><value>ip-172-31-4-108:19888</value></property>
<property><name>mapreduce.framework.name</name><value>yarn</value></property>
<property><name>mapreduce.map.memory.mb</name><value>512</value></property>
<property><name>mapreduce.reduce.memory.mb</name><value>1024</value></property>
<property><name>mapreduce.map.java.opts</name><value>410</value></property>
<property><name>mapreduce.reduce.java.opts</name><value>820</value></property>
</configuration>
yarn-site.xml
<configuration>
<property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
<property><name>yarn.resourcemanager.hostname</name><value>ip-172-31-4-108</value></property>
<property><name>yarn.nodemanager.local-dirs</name><value>file:///xvda1/nodemgr/local</value></property>
<property><name>yarn.nodemanager.log-dirs</name><value>/var/log/hadoop-yarn/containers</value></property>
<property><name>yarn.nodemanager.remote-app-log-dir</name><value>/var/log/hadoop-yarn/apps</value></property>
<property><name>yarn.log-aggregation-enable</name><value>true</value></property>
<property><name>yarn.app.mapreduce.am.resource.mb</name><value>1024</value></property>
<property><name>yarn.app.mapreduce.am.command-opts</name><value>820</value></property>
<property><name>yarn.nodemanager.resource.memory-mb</name><value>6291456</value></property>
<property><name>yarn.scheduler.minimum_allocation-mb</name><value>524288</value></property>
<property><name>yarn.scheduler.maximum_allocation-mb</name><value>6291456</value></property>
</configuration>
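Independently of the freeze, the posted configuration contains values that would by themselves keep containers from being sized sensibly: yarn.nodemanager.resource.memory-mb and the scheduler allocation properties take megabytes, so 6291456 means roughly 6 TB per node and 524288 means 512 GB; the scheduler property names are also misspelled (underscores in minimum_allocation/maximum_allocation where Hadoop expects yarn.scheduler.minimum-allocation-mb / yarn.scheduler.maximum-allocation-mb, so they are silently ignored); and mapreduce.map.java.opts, mapreduce.reduce.java.opts and yarn.app.mapreduce.am.command-opts must be JVM flag strings, not bare numbers. A sketch of corrected values, assuming the slaves have around 7.5 GB of RAM (size these to the real instances):
<!-- yarn-site.xml: these values are in MB -->
<property><name>yarn.nodemanager.resource.memory-mb</name><value>6144</value></property>
<property><name>yarn.scheduler.minimum-allocation-mb</name><value>512</value></property>
<property><name>yarn.scheduler.maximum-allocation-mb</name><value>6144</value></property>
<!-- mapred-site.xml: java opts are JVM flag strings, not bare numbers -->
<property><name>mapreduce.map.java.opts</name><value>-Xmx410m</value></property>
<property><name>mapreduce.reduce.java.opts</name><value>-Xmx820m</value></property>
<property><name>yarn.app.mapreduce.am.command-opts</name><value>-Xmx820m</value></property>
Since you report the same freeze with a minimal configuration, also check with yarn node -list that both NodeManagers actually registered with the ResourceManager; an application attempt stuck in SCHEDULED usually means no node can satisfy the container request.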

Pig Latin - DUMP command not displaying

I am just trying to display the result of the GROUPed records using DUMP, but instead of the data I only get pages of log output. I am only playing with 10 records.
The details:
grunt> DUMP grouped_records;
2016-02-21 17:34:24,338 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: GROUP_BY,FILTER
2016-02-21 17:34:24,339 [main] INFO org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer - {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, DuplicateForEachColumnRewrite, GroupByConstParallelSetter, ImplicitSplitInserter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, NewPartitionFilterOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter], RULES_DISABLED=[FilterLogicExpressionSimplifier, PartitionFilterOptimizer]}
2016-02-21 17:34:24,354 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
2016-02-21 17:34:24,374 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 1
2016-02-21 17:34:24,374 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 1
2016-02-21 17:34:24,434 [main] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at /0.0.0.0:8032
2016-02-21 17:34:24,440 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig script settings are added to the job
2016-02-21 17:34:24,527 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
2016-02-21 17:34:24,530 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Reduce phase detected, estimating # of required reducers.
2016-02-21 17:34:24,534 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Using reducer estimator: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator
2016-02-21 17:34:24,541 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator - BytesPerReducer=1000000000 maxReducers=999 totalInputFileSize=142
2016-02-21 17:34:24,541 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting Parallelism to 1
2016-02-21 17:34:25,128 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - creating jar file Job662989067023626482.jar
2016-02-21 17:34:31,290 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - jar file Job662989067023626482.jar created
2016-02-21 17:34:31,335 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting up single store job
2016-02-21 17:34:31,338 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Key [pig.schematuple] is false, will not generate code.
2016-02-21 17:34:31,338 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Starting process to move generated code to distributed cache
2016-02-21 17:34:31,338 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Setting key [pig.schematuple.classes] with classes to deserialize []
2016-02-21 17:34:31,549 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map-reduce job(s) waiting for submission.
2016-02-21 17:34:31,550 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
2016-02-21 17:34:31,556 [JobControl] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at /0.0.0.0:8032
2016-02-21 17:34:31,607 [JobControl] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2016-02-21 17:34:31,918 [JobControl] INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
2016-02-21 17:34:31,918 [JobControl] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1
2016-02-21 17:34:31,921 [JobControl] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
2016-02-21 17:34:31,979 [JobControl] INFO org.apache.hadoop.mapreduce.JobSubmitter - number of splits:1
2016-02-21 17:34:32,092 [JobControl] INFO org.apache.hadoop.mapreduce.JobSubmitter - Submitting tokens for job: job_1454294818944_0034
2016-02-21 17:34:32,192 [JobControl] INFO org.apache.hadoop.yarn.client.api.impl.YarnClientImpl - Submitted application application_1454294818944_0034
2016-02-21 17:34:32,198 [JobControl] INFO org.apache.hadoop.mapreduce.Job - The url to track the job: http://quickstart.cloudera:8088/proxy/application_1454294818944_0034/
2016-02-21 17:34:32,198 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - HadoopJobId: job_1454294818944_0034
2016-02-21 17:34:32,198 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Processing aliases filtered_records,grouped_records,records
2016-02-21 17:34:32,198 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - detailed locations: M: records[1,10],records[-1,-1],filtered_records[2,19],grouped_records[3,18] C: R:
2016-02-21 17:34:32,198 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - More information at: http://localhost:50030/jobdetails.jsp?jobid=job_1454294818944_0034
2016-02-21 17:34:32,428 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
2016-02-21 17:35:02,623 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 50% complete
2016-02-21 17:35:23,469 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2016-02-21 17:35:23,470 [main] INFO org.apache.pig.tools.pigstats.SimplePigStats - Script Statistics:
HadoopVersion PigVersion UserId StartedAt FinishedAt Features
2.6.0-cdh5.5.0 0.12.0-cdh5.5.0 cloudera 2016-02-21 17:34:24 2016-02-21 17:35:23 GROUP_BY,FILTER
Success!
Job Stats (time in seconds):
JobId Maps Reduces MaxMapTime MinMapTIme AvgMapTime MedianMapTime MaxReduceTime MinReduceTime AvgReduceTime MedianReducetime Alias Feature Outputs
job_1454294818944_0034 1 1 12 12 12 12 16 16 16 16 filtered_records,grouped_records,records GROUP_BY hdfs://quickstart.cloudera:8020/tmp/temp-1703423271/tmp-988597361,
Input(s):
Successfully read 10 records (525 bytes) from: "/user/hduser/input/maxtemppig.tsv"
Output(s):
Successfully stored 0 records in: "hdfs://quickstart.cloudera:8020/tmp/temp-1703423271/tmp-988597361"
Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0
Job DAG:
job_1454294818944_0034
2016-02-21 17:35:23,646 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Success!
2016-02-21 17:35:23,648 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2016-02-21 17:35:23,648 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
2016-02-21 17:35:23,649 [main] INFO org.apache.pig.data.SchemaTupleBackend - Key [pig.schematuple] was not set... will not generate code.
2016-02-21 17:35:23,660 [main] INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
2016-02-21 17:35:23,660 [main] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1
Commands that I tried:
records = LOAD '/user/hduser/input/maxtemppig.tsv' AS (year:chararray, temperature:int, quality:int);
filtered_records = FILTER records BY temperature IN (-10,19) AND quality IN (0,1,4,5,9);
DUMP filtered_records;
grouped_records = GROUP filtered_records BY year;
DUMP grouped_records;
max_temp = FOREACH grouped_records GENERATE group, MAX(filtered_records.temperature);
DUMP max_temp;
My input tsv file...
1950 32 01459
1951 33 01459
1950 21 01459
1940 24 01459
1950 33 01459
2000 30 01459
2010 44 01459
2014 -10 01459
2016 -20 01459
2011 19 01459
What am I missing?
There is a high chance that the parsing is not working and you are filtering all records.
Try
records = LOAD '/user/hduser/input/maxtemppig.tsv' USING PigStorage('\t') AS (year:chararray, temperature:int, quality:int);
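One more thing worth checking, since even correct parsing would leave grouped_records empty here: with the declared schema, the third column of a line like "1950 32 01459" loads into quality as the integer 1459, which quality IN (0,1,4,5,9) always rejects, and temperature IN (-10,19) matches only the exact values -10 and 19, not a range. DUMP records and DESCRIBE records right after the LOAD will show what actually got parsed before the filter runs.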

Some tasks in map() fail when I run it on AWS

I was running PageRank on the s3://aws-publicdatasets/common-crawl/parse-output/segment/1346876860819/metadata-XXXX dataset. The program worked when I used 10 files (about 1 GB) on 2 m1.medium instances, but when I use 300 files (20 GB) on 5 m3.xlarge instances, it fails at map 39%, reduce 4%. Could you please help me find the possible reason for the failure?
Here are the logs.
stderr:
AttemptID:attempt_1411372099942_0001_m_000010_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000014_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000015_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000057_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000103_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000094_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000109_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000108_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000133_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000136_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000010_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000151_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000014_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000168_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000167_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000015_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000174_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000175_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000057_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000181_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000182_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000190_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000103_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000109_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000094_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000200_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000108_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000133_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000199_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000136_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000010_2 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000151_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000206_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000207_0 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000014_2 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000168_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000175_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000167_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000174_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000015_2 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000057_2 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000181_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000182_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000190_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000103_2 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000094_2 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000200_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000109_2 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000108_2 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000133_2 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000199_1 Timed out after 600 secs
AttemptID:attempt_1411372099942_0001_m_000136_2 Timed out after 600 secs
Part of the syslog:
08:24:24,791 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1411372099942_0001_m_000168_1, Status : FAILED
2014-09-22 08:24:46,873 INFO org.apache.hadoop.mapreduce.Job (main): map 39% reduce 4%
2014-09-22 08:24:54,903 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1411372099942_0001_m_000175_1, Status : FAILED
2014-09-22 08:24:54,904 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1411372099942_0001_m_000167_1, Status : FAILED
2014-09-22 08:24:54,904 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1411372099942_0001_m_000174_1, Status : FAILED
2014-09-22 08:24:55,908 INFO org.apache.hadoop.mapreduce.Job (main): map 38% reduce 4%
2014-09-22 08:25:13,968 INFO org.apache.hadoop.mapreduce.Job (main): map 39% reduce 4%
2014-09-22 08:25:25,007 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1411372099942_0001_m_000015_2, Status : FAILED
2014-09-22 08:26:24,210 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1411372099942_0001_m_000057_2, Status : FAILED
2014-09-22 08:26:54,322 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1411372099942_0001_m_000181_1, Status : FAILED
2014-09-22 08:27:24,432 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1411372099942_0001_m_000182_1, Status : FAILED
2014-09-22 08:27:25,435 INFO org.apache.hadoop.mapreduce.Job (main): map 38% reduce 4%
2014-09-22 08:27:54,543 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1411372099942_0001_m_000190_1, Status : FAILED
2014-09-22 08:28:54,751 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1411372099942_0001_m_000103_2, Status : FAILED
2014-09-22 08:29:24,851 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1411372099942_0001_m_000094_2, Status : FAILED
2014-09-22 08:29:24,852 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1411372099942_0001_m_000200_1, Status : FAILED
2014-09-22 08:29:24,853 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1411372099942_0001_m_000109_2, Status : FAILED
2014-09-22 08:29:48,931 INFO org.apache.hadoop.mapreduce.Job (main): map 39% reduce 4%
2014-09-22 08:29:54,954 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1411372099942_0001_m_000108_2, Status : FAILED
2014-09-22 08:30:24,066 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1411372099942_0001_m_000133_2, Status : FAILED
2014-09-22 08:32:54,599 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1411372099942_0001_m_000199_1, Status : FAILED
2014-09-22 08:32:54,600 INFO org.apache.hadoop.mapreduce.Job (main): Task Id : attempt_1411372099942_0001_m_000136_2, Status : FAILED
2014-09-22 08:34:25,910 INFO org.apache.hadoop.mapreduce.Job (main): map 100% reduce 100%
2014-09-22 08:34:25,915 INFO org.apache.hadoop.mapreduce.Job (main): Job job_1411372099942_0001 failed with state FAILED due to: Task failed task_1411372099942_0001_m_000010
Job failed as tasks failed. failedMaps:1 failedReduces:0
Attempts for: s-1W7C8YIFC87Y8, Job 1411372099942_0001, Task
2014-09-22 08:18:27,238 WARN main org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2014-09-22 08:18:27,322 WARN main org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2014-09-22 08:18:28,462 INFO main org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2014-09-22 08:18:28,496 INFO main org.apache.hadoop.metrics2.sink.cloudwatch.CloudWatchSink: Initializing the CloudWatchSink for metrics.
2014-09-22 08:18:28,795 INFO main org.apache.hadoop.metrics2.impl.MetricsSinkAdapter: Sink file started
2014-09-22 08:18:28,967 INFO main org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 300 second(s).
2014-09-22 08:18:28,967 INFO main org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system started
2014-09-22 08:18:28,982 INFO main org.apache.hadoop.mapred.YarnChild: Executing with tokens:
2014-09-22 08:18:28,983 INFO main org.apache.hadoop.mapred.YarnChild: Kind: mapreduce.job, Service: job_1411372099942_0001, Ident: (org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier@3fc15856)
2014-09-22 08:18:29,157 INFO main org.apache.hadoop.mapred.YarnChild: Sleeping for 0ms before retrying again. Got null now.
2014-09-22 08:18:29,880 INFO main org.apache.hadoop.mapred.YarnChild: mapreduce.cluster.local.dir for child: /mnt/var/lib/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1411372099942_0001,/mnt1/var/lib/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1411372099942_0001,/mnt2/var/lib/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1411372099942_0001
2014-09-22 08:18:30,164 WARN main org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2014-09-22 08:18:30,182 WARN main org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2014-09-22 08:18:31,063 INFO main org.apache.hadoop.conf.Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
2014-09-22 08:18:32,100 INFO main org.apache.hadoop.mapred.Task: Using ResourceCalculatorProcessTree : [ ]
2014-09-22 08:18:32,605 INFO main org.apache.hadoop.mapred.MapTask: Processing split: s3://aws-publicdatasets/common-crawl/parse-output/segment/1346876860819/metadata-00122:0+67108864
2014-09-22 08:18:32,810 INFO main amazon.emr.metrics.MetricsSaver: MetricsSaver YarnChild root:hdfs:///mnt/var/em/ period:120 instanceId:i-ec84e7c1 jobflow:j-27XODJ8WMW4VP
2014-09-22 08:18:33,205 WARN main org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2014-09-22 08:18:33,219 WARN main org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2014-09-22 08:18:33,221 INFO main com.amazon.ws.emr.hadoop.fs.guice.EmrFSBaseModule: Consistency disabled, using com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem as FileSystem implementation.
2014-09-22 08:18:35,024 INFO main com.amazon.ws.emr.hadoop.fs.EmrFileSystem: Using com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem as filesystem implementation
2014-09-22 08:18:36,001 WARN main org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2014-09-22 08:18:36,002 WARN main org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2014-09-22 08:18:36,024 INFO main org.apache.hadoop.mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2014-09-22 08:18:36,514 INFO main org.apache.hadoop.mapred.MapTask: (EQUATOR) 0 kvi 52428796(209715184)
2014-09-22 08:18:36,514 INFO main org.apache.hadoop.mapred.MapTask: mapreduce.task.io.sort.mb: 200
2014-09-22 08:18:36,514 INFO main org.apache.hadoop.mapred.MapTask: soft limit at 167772160
2014-09-22 08:18:36,514 INFO main org.apache.hadoop.mapred.MapTask: bufstart = 0; bufvoid = 209715200
2014-09-22 08:18:36,514 INFO main org.apache.hadoop.mapred.MapTask: kvstart = 52428796; length = 13107200
2014-09-22 08:18:36,597 INFO main com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem: Opening 's3://aws-publicdatasets/common-crawl/parse-output/segment/1346876860819/metadata-00122' for reading
2014-09-22 08:18:36,716 INFO main org.apache.hadoop.io.compress.zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
2014-09-22 08:18:36,720 INFO main org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.gz]
2014-09-22 08:18:36,726 INFO main org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
2014-09-22 08:18:36,726 INFO main org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
2014-09-22 08:18:36,727 INFO main org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
task_1411372099942_0001_m_000010 has timed out. Try increasing the timeout configuration parameter:
mapreduce.task.timeout=12000000
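For what it's worth, a minimal sketch of setting this programmatically with the old JobConf API (MyJob is a placeholder class name; on Hadoop 1.x the property is called mapred.task.timeout):

// Raise the task timeout before submitting the job (value is in
// milliseconds; 12000000 ms = 200 minutes, as suggested above).
JobConf conf = new JobConf(MyJob.class);   // MyJob is a placeholder
conf.setLong("mapreduce.task.timeout", 12000000L);

// A task is only killed by this timeout when it neither reads input,
// writes output, nor reports status for the whole interval, so a slow
// map() can also call reporter.progress() (old API) or
// context.progress() (new API) to signal that it is still alive.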

hadoop: reduce happened between flush map output and finish spill before maps done

I'm new to Hadoop, and I'm trying the wordcount/secondsort examples in src/examples.
wordcount test environment:
input:
file01.txt
file02.txt
secondsort test environment:
input:
sample01.txt
sample02.txt
So both tests have two input paths to process.
I added some println() logging to trace the map/reduce process.
Look at what happens between "Starting flush of map output" and "Finished spill 0":
the wordcount program runs two extra reduce passes before the final reduce, while
the secondsort program runs the reduce just once and is done.
Since these inputs are so small, I don't think io.sort.mb/io.sort.factor would affect this.
Can anybody explain this?
Thanks for your patience with my broken English and the long log.
Here is the log output (I cut some irrelevant parts to keep it short):
wordcount log:
[hadoop@localhost ~]$ hadoop jar test.jar com.abc.example.test wordcount output
13/08/07 18:14:05 INFO mapred.FileInputFormat: Total input paths to process : 2
13/08/07 18:14:06 INFO mapred.JobClient: Running job: job_local_0001
13/08/07 18:14:06 INFO util.ProcessTree: setsid exited with exit code 0
...
13/08/07 18:14:06 INFO mapred.MapTask: numReduceTasks: 1
13/08/07 18:14:06 INFO mapred.MapTask: io.sort.mb = 100
13/08/07 18:14:06 INFO mapred.MapTask: data buffer = 79691776/99614720
13/08/07 18:14:06 INFO mapred.MapTask: record buffer = 262144/327680
Mapper: 0 | Hello Hadoop GoodBye Hadoop
13/08/07 18:14:06 INFO mapred.MapTask: **Starting flush of map output**
Reduce: GoodBye
Reduce: GoodBye | 1
Reduce: Hadoop
Reduce: Hadoop | 1
Reduce: Hadoop | 1
Reduce: Hello
Reduce: Hello | 1
13/08/07 18:14:06 INFO mapred.Task: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
13/08/07 18:14:06 INFO mapred.LocalJobRunner: hdfs://localhost:8020/user/hadoop/wordcount/file02.txt:0+28
13/08/07 18:14:06 INFO mapred.Task: Task 'attempt_local_0001_m_000000_0' done.
13/08/07 18:14:06 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@4d16ffed
13/08/07 18:14:06 INFO mapred.MapTask: numReduceTasks: 1
13/08/07 18:14:06 INFO mapred.MapTask: io.sort.mb = 100
13/08/07 18:14:06 INFO mapred.MapTask: data buffer = 79691776/99614720
13/08/07 18:14:06 INFO mapred.MapTask: record buffer = 262144/327680
13/08/07 18:14:06 INFO mapred.MapTask: **Starting flush of map output**
Reduce: Bye
Reduce: Bye | 1
Reduce: Hello
Reduce: Hello | 1
Reduce: world
Reduce: world | 1
Reduce: world | 1
13/08/07 18:14:06 INFO mapred.MapTask: **Finished spill 0**
13/08/07 18:14:06 INFO mapred.Task: Task:attempt_local_0001_m_000001_0 is done. And is in the process of commiting
13/08/07 18:14:06 INFO mapred.LocalJobRunner: hdfs://localhost:8020/user/hadoop/wordcount/file01.txt:0+22
13/08/07 18:14:06 INFO mapred.Task: Task 'attempt_local_0001_m_000001_0' done.
13/08/07 18:14:06 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@1f3c0665
13/08/07 18:14:06 INFO mapred.LocalJobRunner:
13/08/07 18:14:06 INFO mapred.Merger: Merging 2 sorted segments
13/08/07 18:14:06 INFO mapred.Merger: Down to the last merge-pass, with 2 segments left of total size: 77 bytes
13/08/07 18:14:06 INFO mapred.LocalJobRunner:
Reduce: Bye
Reduce: Bye | 1
Reduce: GoodBye
Reduce: GoodBye | 1
Reduce: Hadoop
Reduce: Hadoop | 2
Reduce: Hello
Reduce: Hello | 1
Reduce: Hello | 1
Reduce: world
Reduce: world | 2
13/08/07 18:14:06 INFO mapred.Task: Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting
...
13/08/07 18:14:07 INFO mapred.JobClient: Reduce input groups=5
13/08/07 18:14:07 INFO mapred.JobClient: Combine output records=6
13/08/07 18:14:07 INFO mapred.JobClient: Physical memory (bytes) snapshot=0
13/08/07 18:14:07 INFO mapred.JobClient: Reduce output records=5
13/08/07 18:14:07 INFO mapred.JobClient: Virtual memory (bytes) snapshot=0
13/08/07 18:14:07 INFO mapred.JobClient: Map output records=8
secondsort log info:
[hadoop@localhost ~]$ hadoop jar example.jar com.abc.example.example secondsort output
13/08/07 17:00:11 INFO input.FileInputFormat: Total input paths to process : 2
13/08/07 17:00:11 WARN snappy.LoadSnappy: Snappy native library not loaded
13/08/07 17:00:12 INFO mapred.JobClient: Running job: job_local_0001
13/08/07 17:00:12 INFO util.ProcessTree: setsid exited with exit code 0
13/08/07 17:00:12 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@57d94c7b
13/08/07 17:00:12 INFO mapred.MapTask: io.sort.mb = 100
13/08/07 17:00:12 INFO mapred.MapTask: data buffer = 79691776/99614720
13/08/07 17:00:12 INFO mapred.MapTask: record buffer = 262144/327680
Map: 0 | 5 49
Map: 5 | 9 57
Map: 10 | 19 46
Map: 16 | 3 21
Map: 21 | 9 48
Map: 26 | 7 57
...
13/08/07 17:00:12 INFO mapred.MapTask: **Starting flush of map output**
13/08/07 17:00:12 INFO mapred.MapTask: **Finished spill 0**
13/08/07 17:00:12 INFO mapred.Task: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
13/08/07 17:00:12 INFO mapred.LocalJobRunner:
13/08/07 17:00:12 INFO mapred.Task: Task 'attempt_local_0001_m_000000_0' done.
13/08/07 17:00:12 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@f3a1ea1
13/08/07 17:00:12 INFO mapred.MapTask: io.sort.mb = 100
13/08/07 17:00:12 INFO mapred.MapTask: data buffer = 79691776/99614720
13/08/07 17:00:12 INFO mapred.MapTask: record buffer = 262144/327680
Map: 0 | 20 21
Map: 6 | 50 51
Map: 12 | 50 52
Map: 18 | 50 53
Map: 24 | 50 54
...
13/08/07 17:00:12 INFO mapred.MapTask: **Starting flush of map output**
13/08/07 17:00:12 INFO mapred.MapTask: **Finished spill 0**
13/08/07 17:00:12 INFO mapred.Task: Task:attempt_local_0001_m_000001_0 is done. And is in the process of commiting
13/08/07 17:00:12 INFO mapred.LocalJobRunner:
13/08/07 17:00:12 INFO mapred.Task: Task 'attempt_local_0001_m_000001_0' done.
13/08/07 17:00:12 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@cee4e92
13/08/07 17:00:12 INFO mapred.LocalJobRunner:
13/08/07 17:00:12 INFO mapred.Merger: Merging 2 sorted segments
13/08/07 17:00:12 INFO mapred.Merger: Down to the last merge-pass, with 2 segments left of total size: 1292 bytes
13/08/07 17:00:12 INFO mapred.LocalJobRunner:
Reduce: 0:35 -----------------
Reduce: 0:35 | 35
Reduce: 0:54 -----------------
...
13/08/07 17:00:12 INFO mapred.Task: Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting
13/08/07 17:00:12 INFO mapred.LocalJobRunner:
13/08/07 17:00:12 INFO mapred.Task: Task attempt_local_0001_r_000000_0 is allowed to commit now
13/08/07 17:00:12 INFO output.FileOutputCommitter: Saved output of task 'attempt_local_0001_r_000000_0' to output
13/08/07 17:00:12 INFO mapred.LocalJobRunner: reduce > reduce
13/08/07 17:00:12 INFO mapred.Task: Task 'attempt_local_0001_r_000000_0' done.
13/08/07 17:00:13 INFO mapred.JobClient: map 100% reduce 100%
13/08/07 17:00:13 INFO mapred.JobClient: Job complete: job_local_0001
13/08/07 17:00:13 INFO mapred.JobClient: Counters: 22
13/08/07 17:00:13 INFO mapred.JobClient: File Output Format Counters
13/08/07 17:00:13 INFO mapred.JobClient: Bytes Written=4787
...
13/08/07 17:00:13 INFO mapred.JobClient: SPLIT_RAW_BYTES=236
13/08/07 17:00:13 INFO mapred.JobClient: Reduce input records=92
PS: Here are the main() methods, so others can check how the jobs are set up.
wordcount:
public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(test.class);
    conf.setJobName("wordcount");
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);
    conf.setMapperClass(Map.class);
    conf.setCombinerClass(Reduce.class); // the Reduce class doubles as the combiner
    conf.setReducerClass(Reduce.class);
    conf.setInputFormat(TextInputFormat.class);
    conf.setOutputFormat(TextOutputFormat.class);
    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));
    JobClient.runJob(conf);
}
secondsort:
public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "secondarysort");
    job.setJarByClass(example.class);
    job.setMapperClass(Map.class);
    job.setReducerClass(Reduce.class); // note: no combiner is set for this job
    job.setPartitionerClass(FirstPartitioner.class);
    job.setGroupingComparatorClass(GroupingComparator.class);
    job.setMapOutputKeyClass(IntPair.class);
    job.setMapOutputValueClass(IntWritable.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    job.setInputFormatClass(TextInputFormat.class);
    job.setOutputFormatClass(TextOutputFormat.class);
    FileInputFormat.setInputPaths(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
}
Combine output records=6
This says it all: your reduce function is used both as a combiner and as a reducer, so what you are seeing is output from the combiner. The combiner is (sometimes) invoked when map output is spilled, which is why "Reduce: ..." lines appear in the log before the maps are done.
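One way to verify this (a sketch against the old mapred API used by the wordcount main() above; the Combine class name is hypothetical): register a separate combiner whose println prefix differs from the reducer's, so spill-time invocations stand out in the log.

import java.io.IOException;                // file-level imports for the sketch
import java.util.Iterator;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;         // MapReduceBase, Reducer, OutputCollector, Reporter

// Nested inside the job class, next to Map and Reduce: same summing logic
// as Reduce, but prints "Combine:" so spill-time invocations are visible.
public static class Combine extends MapReduceBase
        implements Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterator<IntWritable> values,
            OutputCollector<Text, IntWritable> output, Reporter reporter)
            throws IOException {
        int sum = 0;
        while (values.hasNext()) {
            sum += values.next().get();
        }
        System.out.println("Combine: " + key + " | " + sum);
        output.collect(key, new IntWritable(sum));
    }
}

// and in main(), instead of conf.setCombinerClass(Reduce.class):
conf.setCombinerClass(Combine.class);

With that change, the lines between "Starting flush of map output" and "Finished spill 0" should read "Combine: ..." instead of "Reduce: ...", confirming they come from the combiner rather than from the final reduce.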
I think you should have added your code, at least the main() part, to show how your job is set up; that would make it easier to answer your question.
Lines such as
Reduce: GoodBye
Reduce: GoodBye | 1
come from println(...) calls in your own source code, so that is where you need to look.
