MapReduce job is getting stuck - Hadoop

I am running a MapReduce program, but it is getting stuck.
19/09/16 09:35:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
19/09/16 09:35:04 INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:8032
19/09/16 09:35:05 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
19/09/16 09:35:05 INFO input.FileInputFormat: Total input files to process : 1
19/09/16 09:35:06 INFO mapreduce.JobSubmitter: number of splits:1
19/09/16 09:35:06 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
19/09/16 09:35:07 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1568605566346_0002
19/09/16 09:35:07 INFO impl.YarnClientImpl: Submitted application application_1568605566346_0002
19/09/16 09:35:07 INFO mapreduce.Job: The url to track the job: http://ec2-18-222-170-204.us-east-2.compute.amazonaws.com:8088/proxy/application_1568605566346_0002/
19/09/16 09:35:07 INFO mapreduce.Job: Running job: job_1568605566346_0002
Here is my disk availability:
Filesystem Size Used Avail Use% Mounted on
udev 3.9G 0 3.9G 0% /dev
tmpfs 786M 9.5M 776M 2% /run
/dev/sda3 184G 12G 163G 7% /
tmpfs 3.9G 138M 3.7G 4% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/sda5 125G 21G 98G 18% /home
cgmfs 100K 0 100K 0% /run/cgmanager/fs
tmpfs 786M 60K 786M 1% /run/user/1000
tmpfs 786M 0 786M 0% /run/user/1001
Could you please tell me what is going wrong? It is just a single-node Hadoop cluster.
Thanks
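For reference, when a job hangs right after "Running job:", YARN has usually accepted the application but cannot yet allocate a container for it (often a memory/vcore shortage or no healthy NodeManager). Quick checks with the standard YARN CLI, using the application id from the log above:
yarn application -status application_1568605566346_0002   # state and diagnostics
yarn node -list -all                                       # is a NodeManager registered, and with how much memory?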

Related

Difference between NameNode heap usage and ResourceManager heap usage (trying to find NameNode heap usage cause)?

What is the difference between NameNode heap usage and ResourceManager heap usage? I am trying to find the cause of heavy NameNode heap usage.
In the Ambari dashboard, I see...
when running some Sqoop jobs. I am not sure what is causing the NameNode usage to be so high here (I don't have a lot of experience with Hadoop admin work). Is this an unusual amount (I only noticed it recently)?
Furthermore, the Sqoop jobs appear to be frozen after 100% completion of the MapReduce task for an abnormally long time; for example, I am seeing...
[2020-01-31 14:00:55,193] INFO mapreduce.JobSubmitter: number of splits:12
[2020-01-31 14:00:55,402] INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1579648183118_1085
[2020-01-31 14:00:55,402] INFO mapreduce.JobSubmitter: Executing with tokens: []
[2020-01-31 14:00:55,687] INFO conf.Configuration: found resource resource-types.xml at file:/etc/hadoop/3.1.0.0-78/0/resource-types.xml
[2020-01-31 14:00:55,784] INFO impl.YarnClientImpl: Submitted application application_1579648183118_1085
[2020-01-31 14:00:55,837] mapreduce.Job: The url to track the job: http://hw001.ucera.local:8088/proxy/application_1579648183118_1085/
[2020-01-31 14:00:55,837] mapreduce.Job: Running job: job_1579648183118_1085
[2020-01-31 14:01:02,964] mapreduce.Job: Job job_1579648183118_1085 running in uber mode : false
[2020-01-31 14:01:02,965] mapreduce.Job: map 0% reduce 0%
[2020-01-31 14:01:18,178] mapreduce.Job: map 8% reduce 0%
[2020-01-31 14:02:21,552] mapreduce.Job: map 17% reduce 0%
[2020-01-31 14:04:55,239] mapreduce.Job: map 25% reduce 0%
[2020-01-31 14:05:36,417] mapreduce.Job: map 33% reduce 0%
[2020-01-31 14:05:37,424] mapreduce.Job: map 42% reduce 0%
[2020-01-31 14:05:40,440] mapreduce.Job: map 50% reduce 0%
[2020-01-31 14:05:41,444] mapreduce.Job: map 58% reduce 0%
[2020-01-31 14:05:44,455] mapreduce.Job: map 67% reduce 0%
[2020-01-31 14:05:52,484] mapreduce.Job: map 75% reduce 0%
[2020-01-31 14:05:56,499] mapreduce.Job: map 83% reduce 0%
[2020-01-31 14:05:59,528] mapreduce.Job: map 92% reduce 0%
[2020-01-31 14:06:00,534] INFO mapreduce.Job: map 100% reduce 0%
<...after some time longer than usual...>
[2020-01-31 14:10:05,446] INFO mapreduce.Job: Job job_1579648183118_1085 completed successfully
My Hadoop version:
[airflow@airflowetl root]$ hadoop version
Hadoop 3.1.1.3.1.0.0-78
Source code repository git@github.com:hortonworks/hadoop.git -r e4f82af51faec922b4804d0232a637422ec29e64
Compiled by jenkins on 2018-12-06T12:26Z
Compiled with protoc 2.5.0
From source with checksum eab9fa2a6aa38c6362c66d8df75774
This command was run using /usr/hdp/3.1.0.0-78/hadoop/hadoop-common-3.1.1.3.1.0.0-78.jar
Does anyone with more Hadoop experience know what could be going on here? Any debugging advice?
NameNode heap usage is mostly determined by the number of file blocks that are stored in HDFS. In particular, many small files, or many files being written at once, will cause a large heap.
The ResourceManager is not correlated with the NameNode. Its heap is determined by the number of YARN jobs that are actively being tracked.
In a cluster I've maintained, the NameNode heap was 32 GB, and I think the ResourceManager's was only 8 GB.
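For reference, a quick way to gauge how many files and blocks the NameNode is tracking (standard HDFS commands, run as a user with read access to the whole namespace; a commonly quoted rule of thumb is roughly 150 bytes of NameNode heap per file/block object):
hdfs dfs -count /                       # directory count, file count, bytes under /
hdfs fsck / 2>/dev/null | tail -n 25    # the summary includes total files and total blocks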

Cannot create directory in HDFS. Name node is in safe mode

I have deployed Hadoop in Docker, running on an AWS EC2 Ubuntu AMI instance.
When I try to create a directory in HDFS, it says "Cannot create directory. Name node is in safe mode".
Below are the properties in hdfs-site.xml:
name: dfs.replication
value: 1
name: dfs.namenode.name.dir
value: /usr/local/hadoop/data
When I check the HDFS report, it gives the output below.
bash-4.1# hdfs dfsadmin -report
19/01/05 12:34:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Safe mode is ON
Configured Capacity: 0 (0 B)
Present Capacity: 335872 (328 KB)
DFS Remaining: 0 (0 B)
DFS Used: 335872 (328 KB)
DFS Used%: 100.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Below is some detail about the NameNode.
bash-4.1# hdfs dfs -df
19/01/05 12:37:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Filesystem Size Used Available Use%
hdfs://0cd4da30c603:9000 0 335872 0 Infinity%
If I tell it to leave safe mode, it goes back into safe mode within seconds.
bash-4.1# hdfs dfsadmin -safemode leave
19/01/05 12:42:26 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Safe mode is OFF
bash-4.1# hdfs dfsadmin -safemode get
19/01/05 12:42:36 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Safe mode is ON
Below is my file system information:
bash-4.1# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 25G 6.2G 19G 26% /
tmpfs 64M 0 64M 0% /dev
tmpfs 492M 0 492M 0% /sys/fs/cgroup
/dev/xvda1 25G 6.2G 19G 26% /data/lab
/dev/xvda1 25G 6.2G 19G 26% /etc/resolv.conf
/dev/xvda1 25G 6.2G 19G 26% /etc/hostname
/dev/xvda1 25G 6.2G 19G 26% /etc/hosts
shm 64M 0 64M 0% /dev/shm
tmpfs 492M 0 492M 0% /proc/acpi
tmpfs 64M 0 64M 0% /proc/kcore
tmpfs 64M 0 64M 0% /proc/keys
tmpfs 64M 0 64M 0% /proc/timer_list
tmpfs 64M 0 64M 0% /proc/sched_debug
tmpfs 492M 0 492M 0% /proc/scsi
tmpfs 492M 0 492M 0% /sys/firmware
What I'm expecting is to be able to create a directory in HDFS so that I can run a MapReduce job.
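For reference, the dfsadmin report above shows "Configured Capacity: 0", i.e. no DataNode has registered with the NameNode, which is why it keeps falling back into safe mode. A minimal set of checks inside the container (a sketch; the log path assumes a standard $HADOOP_HOME layout):
jps                                                   # is a DataNode process running at all?
hdfs dfsadmin -report | grep -i datanodes             # count of registered DataNodes (exact wording varies by version)
tail -n 50 $HADOOP_HOME/logs/hadoop-*-datanode-*.log  # why the DataNode failed to start or register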

Hive Testbench data generation failed

I cloned the Hive Testbench to try to run the Hive benchmark on a Hadoop cluster built with the Apache binary distributions of Hadoop 2.9.0, Hive 2.3.0 and Tez 0.9.0.
I managed to finish building the two data generators, TPC-H and TPC-DS. However, the next step, data generation, fails for both TPC-H and TPC-DS. The failure is very consistent: each time it fails at exactly the same step and produces the same error messages.
For TPC-H, the data generation screen output is here:
$ ./tpch-setup.sh 10
ls: `/tmp/tpch-generate/10/lineitem': No such file or directory
Generating data at scale factor 10.
...
18/01/02 14:43:00 INFO mapreduce.Job: Running job: job_1514226810133_0050
18/01/02 14:43:01 INFO mapreduce.Job: Job job_1514226810133_0050 running in uber mode : false
18/01/02 14:43:01 INFO mapreduce.Job: map 0% reduce 0%
18/01/02 14:44:38 INFO mapreduce.Job: map 10% reduce 0%
18/01/02 14:44:39 INFO mapreduce.Job: map 20% reduce 0%
18/01/02 14:44:46 INFO mapreduce.Job: map 30% reduce 0%
18/01/02 14:44:48 INFO mapreduce.Job: map 40% reduce 0%
18/01/02 14:44:58 INFO mapreduce.Job: map 70% reduce 0%
18/01/02 14:45:14 INFO mapreduce.Job: map 80% reduce 0%
18/01/02 14:45:15 INFO mapreduce.Job: map 90% reduce 0%
18/01/02 14:45:23 INFO mapreduce.Job: map 100% reduce 0%
18/01/02 14:45:23 INFO mapreduce.Job: Job job_1514226810133_0050 completed successfully
18/01/02 14:45:23 INFO mapreduce.Job: Counters: 0
SLF4J: Class path contains multiple SLF4J bindings.
...
ls: `/tmp/tpch-generate/10/lineitem': No such file or directory
Data generation failed, exiting.
For TPC-DS, the error messages are here:
$ ./tpcds-setup.sh 10
...
18/01/02 22:13:58 INFO Configuration.deprecation: mapred.task.timeout is deprecated. Instead, use mapreduce.task.timeout
18/01/02 22:13:58 INFO client.RMProxy: Connecting to ResourceManager at /192.168.10.15:8032
18/01/02 22:13:59 INFO input.FileInputFormat: Total input files to process : 1
18/01/02 22:13:59 INFO mapreduce.JobSubmitter: number of splits:10
18/01/02 22:13:59 INFO Configuration.deprecation: io.sort.mb is deprecated. Instead, use mapreduce.task.io.sort.mb
18/01/02 22:13:59 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
18/01/02 22:13:59 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1514226810133_0082
18/01/02 22:14:00 INFO client.YARNRunner: Number of stages: 1
18/01/02 22:14:00 INFO Configuration.deprecation: mapred.job.map.memory.mb is deprecated. Instead, use mapreduce.map.memory.mb
18/01/02 22:14:00 INFO client.TezClient: Tez Client Version: [ component=tez-api, version=0.9.0, revision=0873a0118a895ca84cbdd221d8ef56fedc4b43d0, SCM-URL=scm:git:https://git-wip-us.apache.org/repos/asf/tez.git, buildTime=2017-07-18T05:41:23Z ]
18/01/02 22:14:00 INFO client.RMProxy: Connecting to ResourceManager at /192.168.10.15:8032
18/01/02 22:14:00 INFO client.TezClient: Submitting DAG application with id: application_1514226810133_0082
18/01/02 22:14:00 INFO client.TezClientUtils: Using tez.lib.uris value from configuration: hdfs://192.168.10.15:8020/apps/tez,hdfs://192.168.10.15:8020/apps/tez/lib/
18/01/02 22:14:00 INFO client.TezClientUtils: Using tez.lib.uris.classpath value from configuration: null
18/01/02 22:14:00 INFO client.TezClient: Tez system stage directory hdfs://192.168.10.15:8020/tmp/hadoop-yarn/staging/rapids/.staging/job_1514226810133_0082/.tez/application_1514226810133_0082 doesn't exist and is created
18/01/02 22:14:01 INFO client.TezClient: Submitting DAG to YARN, applicationId=application_1514226810133_0082, dagName=GenTable+all_10
18/01/02 22:14:01 INFO impl.YarnClientImpl: Submitted application application_1514226810133_0082
18/01/02 22:14:01 INFO client.TezClient: The url to track the Tez AM: http://boray05:8088/proxy/application_1514226810133_0082/
18/01/02 22:14:05 INFO client.RMProxy: Connecting to ResourceManager at /192.168.10.15:8032
18/01/02 22:14:05 INFO mapreduce.Job: The url to track the job: http://boray05:8088/proxy/application_1514226810133_0082/
18/01/02 22:14:05 INFO mapreduce.Job: Running job: job_1514226810133_0082
18/01/02 22:14:06 INFO mapreduce.Job: Job job_1514226810133_0082 running in uber mode : false
18/01/02 22:14:06 INFO mapreduce.Job: map 0% reduce 0%
18/01/02 22:15:51 INFO mapreduce.Job: map 10% reduce 0%
18/01/02 22:15:54 INFO mapreduce.Job: map 20% reduce 0%
18/01/02 22:15:55 INFO mapreduce.Job: map 40% reduce 0%
18/01/02 22:15:56 INFO mapreduce.Job: map 50% reduce 0%
18/01/02 22:16:07 INFO mapreduce.Job: map 60% reduce 0%
18/01/02 22:16:09 INFO mapreduce.Job: map 70% reduce 0%
18/01/02 22:16:11 INFO mapreduce.Job: map 80% reduce 0%
18/01/02 22:16:19 INFO mapreduce.Job: map 90% reduce 0%
18/01/02 22:19:54 INFO mapreduce.Job: map 100% reduce 0%
18/01/02 22:19:54 INFO mapreduce.Job: Job job_1514226810133_0082 completed successfully
18/01/02 22:19:54 INFO mapreduce.Job: Counters: 0
...
TPC-DS text data generation complete.
Loading text data into external tables.
Optimizing table time_dim (2/24).
Optimizing table date_dim (1/24).
Optimizing table item (3/24).
Optimizing table customer (4/24).
Optimizing table household_demographics (6/24).
Optimizing table customer_demographics (5/24).
Optimizing table customer_address (7/24).
Optimizing table store (8/24).
Optimizing table promotion (9/24).
Optimizing table warehouse (10/24).
Optimizing table ship_mode (11/24).
Optimizing table reason (12/24).
Optimizing table income_band (13/24).
Optimizing table call_center (14/24).
Optimizing table web_page (15/24).
Optimizing table catalog_page (16/24).
Optimizing table web_site (17/24).
make: *** [store_sales] Error 2
make: *** Waiting for unfinished jobs....
make: *** [store_returns] Error 2
Data loaded into database tpcds_bin_partitioned_orc_10.
I notice that the target temporary HDFS directory is always empty, both while the job is running and after the failure, except for the generated sub-directories.
Now I don't even know whether the failure is due to Hadoop configuration issues, mismatched software versions, or something else. Any help?
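For reference, two quick checks that would narrow this down (standard HDFS/YARN commands, using the TPC-H application id from the log above; yarn logs only works if log aggregation is enabled):
hdfs dfs -ls /tmp/tpch-generate/10/          # did the generator write anything at all?
yarn logs -applicationId application_1514226810133_0050 | grep -iE 'error|exception' | head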
I had a similar issue when running this job. When I passed the script an HDFS location that I had permission to write to, it ran successfully.
./tpcds-setup.sh 10 <hdfs_directory_path>
I still get this error when the script kicks off:
Data loaded into database tpcds_bin_partitioned_orc_10.
ls: `<hdfs_directory_path>/10': No such file or directory
However, the script runs successfully and the data is generated and loaded into the Hive tables at the end.
Hope that helps.
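For example (a sketch; the directory name is hypothetical and just needs to be writable by the user running the script):
hdfs dfs -mkdir -p /user/$(whoami)/tpcds-generate
./tpcds-setup.sh 10 /user/$(whoami)/tpcds-generate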

hadoop yarn single node performance tuning

I have a Hadoop 2.5.2 single-node installation on my Ubuntu VM, which has 4 cores at 3 GHz each and 4 GB of memory. This VM is not for production, only for demo and learning.
Then I wrote a very simple map-reduce application in Python and used it to process 49 XML files. All of these XML files are small, a few hundred lines each, so I expected a quick run. But to my big surprise, it took more than 20 minutes to finish the job (the output of the job is correct). Below are the output metrics:
14/12/15 19:37:55 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
14/12/15 19:37:57 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
14/12/15 19:38:03 INFO mapred.FileInputFormat: Total input paths to process : 49
14/12/15 19:38:06 INFO mapreduce.JobSubmitter: number of splits:49
14/12/15 19:38:08 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1418368500264_0005
14/12/15 19:38:10 INFO impl.YarnClientImpl: Submitted application application_1418368500264_0005
14/12/15 19:38:10 INFO mapreduce.Job: Running job: job_1418368500264_0005
14/12/15 19:38:59 INFO mapreduce.Job: Job job_1418368500264_0005 running in uber mode : false
14/12/15 19:38:59 INFO mapreduce.Job: map 0% reduce 0%
14/12/15 19:39:42 INFO mapreduce.Job: map 2% reduce 0%
14/12/15 19:40:05 INFO mapreduce.Job: map 4% reduce 0%
14/12/15 19:40:28 INFO mapreduce.Job: map 6% reduce 0%
14/12/15 19:40:49 INFO mapreduce.Job: map 8% reduce 0%
14/12/15 19:41:10 INFO mapreduce.Job: map 10% reduce 0%
14/12/15 19:41:29 INFO mapreduce.Job: map 12% reduce 0%
14/12/15 19:41:50 INFO mapreduce.Job: map 14% reduce 0%
14/12/15 19:42:08 INFO mapreduce.Job: map 16% reduce 0%
14/12/15 19:42:28 INFO mapreduce.Job: map 18% reduce 0%
14/12/15 19:42:49 INFO mapreduce.Job: map 20% reduce 0%
14/12/15 19:43:08 INFO mapreduce.Job: map 22% reduce 0%
14/12/15 19:43:28 INFO mapreduce.Job: map 24% reduce 0%
14/12/15 19:43:48 INFO mapreduce.Job: map 27% reduce 0%
14/12/15 19:44:09 INFO mapreduce.Job: map 29% reduce 0%
14/12/15 19:44:29 INFO mapreduce.Job: map 31% reduce 0%
14/12/15 19:44:49 INFO mapreduce.Job: map 33% reduce 0%
14/12/15 19:45:09 INFO mapreduce.Job: map 35% reduce 0%
14/12/15 19:45:28 INFO mapreduce.Job: map 37% reduce 0%
14/12/15 19:45:49 INFO mapreduce.Job: map 39% reduce 0%
14/12/15 19:46:09 INFO mapreduce.Job: map 41% reduce 0%
14/12/15 19:46:29 INFO mapreduce.Job: map 43% reduce 0%
14/12/15 19:46:49 INFO mapreduce.Job: map 45% reduce 0%
14/12/15 19:47:09 INFO mapreduce.Job: map 47% reduce 0%
14/12/15 19:47:29 INFO mapreduce.Job: map 49% reduce 0%
14/12/15 19:47:49 INFO mapreduce.Job: map 51% reduce 0%
14/12/15 19:48:08 INFO mapreduce.Job: map 53% reduce 0%
14/12/15 19:48:28 INFO mapreduce.Job: map 55% reduce 0%
14/12/15 19:48:48 INFO mapreduce.Job: map 57% reduce 0%
14/12/15 19:49:09 INFO mapreduce.Job: map 59% reduce 0%
14/12/15 19:49:29 INFO mapreduce.Job: map 61% reduce 0%
14/12/15 19:49:55 INFO mapreduce.Job: map 63% reduce 0%
14/12/15 19:50:23 INFO mapreduce.Job: map 65% reduce 0%
14/12/15 19:50:53 INFO mapreduce.Job: map 67% reduce 0%
14/12/15 19:51:22 INFO mapreduce.Job: map 69% reduce 0%
14/12/15 19:51:50 INFO mapreduce.Job: map 71% reduce 0%
14/12/15 19:52:18 INFO mapreduce.Job: map 73% reduce 0%
14/12/15 19:52:48 INFO mapreduce.Job: map 76% reduce 0%
14/12/15 19:53:18 INFO mapreduce.Job: map 78% reduce 0%
14/12/15 19:53:48 INFO mapreduce.Job: map 80% reduce 0%
14/12/15 19:54:18 INFO mapreduce.Job: map 82% reduce 0%
14/12/15 19:54:48 INFO mapreduce.Job: map 84% reduce 0%
14/12/15 19:55:19 INFO mapreduce.Job: map 86% reduce 0%
14/12/15 19:55:48 INFO mapreduce.Job: map 88% reduce 0%
14/12/15 19:56:16 INFO mapreduce.Job: map 90% reduce 0%
14/12/15 19:56:44 INFO mapreduce.Job: map 92% reduce 0%
14/12/15 19:57:14 INFO mapreduce.Job: map 94% reduce 0%
14/12/15 19:57:45 INFO mapreduce.Job: map 96% reduce 0%
14/12/15 19:58:15 INFO mapreduce.Job: map 98% reduce 0%
14/12/15 19:58:46 INFO mapreduce.Job: map 100% reduce 0%
14/12/15 19:59:20 INFO mapreduce.Job: map 100% reduce 100%
14/12/15 19:59:28 INFO mapreduce.Job: Job job_1418368500264_0005 completed successfully
14/12/15 19:59:30 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=17856
FILE: Number of bytes written=5086434
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=499030
HDFS: Number of bytes written=10049
HDFS: Number of read operations=150
HDFS: Number of large read operations=0
HDFS: Number of write operations=2
Job Counters
Launched map tasks=49
Launched reduce tasks=1
Data-local map tasks=49
Total time spent by all maps in occupied slots (ms)=8854232
Total time spent by all reduces in occupied slots (ms)=284672
Total time spent by all map tasks (ms)=1106779
Total time spent by all reduce tasks (ms)=35584
Total vcore-seconds taken by all map tasks=1106779
Total vcore-seconds taken by all reduce tasks=35584
Total megabyte-seconds taken by all map tasks=1133341696
Total megabyte-seconds taken by all reduce tasks=36438016
Map-Reduce Framework
Map input records=9352
Map output records=296
Map output bytes=17258
Map output materialized bytes=18144
Input split bytes=6772
Combine input records=0
Combine output records=0
Reduce input groups=53
Reduce shuffle bytes=18144
Reduce input records=296
Reduce output records=52
Spilled Records=592
Shuffled Maps =49
Failed Shuffles=0
Merged Map outputs=49
GC time elapsed (ms)=33590
CPU time spent (ms)=191390
Physical memory (bytes) snapshot=13738057728
Virtual memory (bytes) snapshot=66425016320
Total committed heap usage (bytes)=10799808512
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=492258
File Output Format Counters
Bytes Written=10049
14/12/15 19:59:30 INFO streaming.StreamJob: Output directory: /data_output/sb50projs_1_output
As a Hadoop newbie, faced with this unreasonably slow performance, I have several questions:
How do I configure Hadoop/YARN/MapReduce to make the whole environment more suitable for trial usage?
I understand Hadoop is designed for huge data and big files. But for a trial environment, my files are small and my data is very limited, so which default configuration items should I change? I have changed "dfs.blocksize" in hdfs-site.xml to a smaller value to match my small files, but there seems to be no big improvement. I know there are some JVM configuration items in yarn-site.xml and mapred-site.xml, but I am not sure how to adjust them.
How do I read the Hadoop logs?
Under the logs folder, there are separate log files for the NodeManager/ResourceManager/NameNode/DataNode. I tried to read these files to understand how the 20 minutes were spent during the run, but it's not easy for a newbie like me. So I wonder whether there is any tool/UI that could help me analyze the logs.
Basic performance tuning tools
I have googled around for this and got a bunch of names like Ganglia/Nagios/Vaidya/Ambari. I want to know which tool is best for analyzing an issue like "why did it take 20 minutes to do such a simple job?".
Large number of Hadoop processes
Even when no job is running on my Hadoop installation, I find around 100 Hadoop processes on my VM, like below (I am using htop and sorting the result by memory). Is this normal for Hadoop, or have I misconfigured something in my environment?
You don't have to change anything.
The default configuration is meant for a small environment. You may change it as the environment grows; there are a lot of parameters and a lot of time can go into fine tuning.
But I admit your configuration is smaller than the usual ones used for tests.
The logs you have to read aren't the service logs but the job logs. Find them in /var/log/hadoop-yarn/containers/
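For reference, if log aggregation is enabled, the same container logs can also be pulled with the YARN CLI (application id from the run above):
yarn logs -applicationId application_1418368500264_0005 | less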
If you want a better view of your MR job, use the web interface at http://127.0.0.1:8088/. You will see your job's progress in real time.
IMO, basic tuning = use the Hadoop web interfaces. There are plenty available natively.
I think you have found your problem. This can be normal, or not.
In short, YARN launches MR tasks to use all the available memory:
The available memory is set in your yarn-site.xml: yarn.nodemanager.resource.memory-mb (defaults to 8 GiB).
The memory for a task is defined in mapred-site.xml, or in the task itself, by the property mapreduce.map.memory.mb (defaults to 1536 MiB).
So:
Change the available memory for your NodeManager (to 3 GiB, in order to leave 1 GiB for the system).
Change the memory available for the Hadoop services (-Xmx in hadoop-env.sh and yarn-env.sh) so that the system plus each Hadoop service (NameNode / DataNode / ResourceManager / NodeManager) stays under 1 GiB.
Change the memory for your map tasks (512 MiB?). The smaller it is, the more tasks can run at the same time.
Change yarn.scheduler.minimum-allocation-mb to 512 in yarn-site.xml to allow mappers with less than 1 GiB of memory (see the sketch below).
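A minimal sketch of those settings for a 4 GB single-node VM (the values are assumptions to adjust; the streaming jar path and the input/output paths are hypothetical):
# yarn-site.xml
#   yarn.nodemanager.resource.memory-mb  = 3072
#   yarn.scheduler.minimum-allocation-mb = 512
# mapred-site.xml
#   mapreduce.map.memory.mb              = 512
#   mapreduce.reduce.memory.mb           = 1024
# After restarting YARN, the same values can also be overridden per job, e.g. with streaming:
hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-2.5.2.jar \
  -D mapreduce.map.memory.mb=512 -D mapreduce.reduce.memory.mb=1024 \
  -input /data_input -output /data_output/run1 \
  -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py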
I hope this will help you.

Hadoop mapreduce running very slowly

I am using a 4-DataNode/1-NameNode Hadoop cluster, version 1.1.2, installed as VMs on XenServer. I had a 1 GB text file and tried to run wordcount on it. The map phase took 2 hours and the reducer just hangs, while a plain Perl script finished the job in 10 minutes. It looks like something is missing in my setup.
Even small files of a few KB take a bit too long.
[hadoop@master ~]$ hadoop jar /usr/share/hadoop/hadoop-examples-1.1.2.jar wordcount huge out
13/05/29 10:45:09 INFO input.FileInputFormat: Total input paths to process : 1
13/05/29 10:45:09 INFO util.NativeCodeLoader: Loaded the native-hadoop library
13/05/29 10:45:09 WARN snappy.LoadSnappy: Snappy native library not loaded
13/05/29 10:45:11 INFO mapred.JobClient: Running job: job_201305290801_0002
13/05/29 10:45:12 INFO mapred.JobClient: map 0% reduce 0%
13/05/29 10:57:14 INFO mapred.JobClient: map 2% reduce 0%
13/05/29 10:58:01 INFO mapred.JobClient: map 3% reduce 0%
13/05/29 10:58:53 INFO mapred.JobClient: map 4% reduce 0%
13/05/29 10:58:54 INFO mapred.JobClient: map 5% reduce 0%
13/05/29 10:59:33 INFO mapred.JobClient: map 6% reduce 0%
13/05/29 11:01:52 INFO mapred.JobClient: map 7% reduce 0%
13/05/29 11:03:02 INFO mapred.JobClient: map 8% reduce 0%
13/05/29 11:03:20 INFO mapred.JobClient: Task Id : attempt_201305290801_0002_m_000002_0, Status : FAILED
Task attempt_201305290801_0002_m_000002_0 failed to report status for 604 seconds. Killing!
13/05/29 11:03:28 INFO mapred.JobClient: Task Id : attempt_201305290801_0002_m_000003_0, Status : FAILED
Task attempt_201305290801_0002_m_000003_0 failed to report status for 604 seconds. Killing!
13/05/29 11:03:29 INFO mapred.JobClient: map 9% reduce 0%
13/05/29 11:04:07 INFO mapred.JobClient: map 10% reduce 0%
13/05/29 11:05:13 INFO mapred.JobClient: map 11% reduce 0%
13/05/29 11:06:34 INFO mapred.JobClient: map 12% reduce 0%
13/05/29 11:06:59 INFO mapred.JobClient: map 13% reduce 0%
13/05/29 11:08:14 INFO mapred.JobClient: map 14% reduce 0%
13/05/29 11:08:39 INFO mapred.JobClient: map 15% reduce 0%
13/05/29 11:09:35 INFO mapred.JobClient: map 16% reduce 0%
13/05/29 11:10:03 INFO mapred.JobClient: map 17% reduce 0%
13/05/29 11:10:55 INFO mapred.JobClient: map 18% reduce 0%
13/05/29 11:11:47 INFO mapred.JobClient: map 19% reduce 0%
13/05/29 11:14:05 INFO mapred.JobClient: map 20% reduce 0%
13/05/29 11:15:22 INFO mapred.JobClient: map 21% reduce 0%
13/05/29 11:15:49 INFO mapred.JobClient: map 22% reduce 0%
13/05/29 11:17:09 INFO mapred.JobClient: map 23% reduce 0%
13/05/29 11:18:06 INFO mapred.JobClient: map 24% reduce 0%
13/05/29 11:18:29 INFO mapred.JobClient: map 25% reduce 0%
13/05/29 11:18:53 INFO mapred.JobClient: map 26% reduce 0%
13/05/29 11:20:05 INFO mapred.JobClient: map 27% reduce 0%
13/05/29 11:21:09 INFO mapred.JobClient: map 28% reduce 0%
13/05/29 11:21:45 INFO mapred.JobClient: map 29% reduce 0%
13/05/29 11:22:14 INFO mapred.JobClient: map 30% reduce 0%
13/05/29 11:22:31 INFO mapred.JobClient: map 31% reduce 0%
13/05/29 11:22:32 INFO mapred.JobClient: map 32% reduce 0%
13/05/29 11:23:01 INFO mapred.JobClient: map 33% reduce 0%
13/05/29 11:23:41 INFO mapred.JobClient: map 34% reduce 0%
13/05/29 11:24:29 INFO mapred.JobClient: map 35% reduce 0%
13/05/29 11:25:16 INFO mapred.JobClient: map 36% reduce 0%
13/05/29 11:25:58 INFO mapred.JobClient: map 37% reduce 0%
13/05/29 11:27:09 INFO mapred.JobClient: map 38% reduce 0%
13/05/29 11:27:55 INFO mapred.JobClient: map 39% reduce 0%
13/05/29 11:28:33 INFO mapred.JobClient: map 40% reduce 0%
13/05/29 11:29:50 INFO mapred.JobClient: map 41% reduce 0%
13/05/29 11:30:29 INFO mapred.JobClient: map 42% reduce 0%
13/05/29 11:31:37 INFO mapred.JobClient: map 43% reduce 0%
13/05/29 11:32:10 INFO mapred.JobClient: map 44% reduce 0%
13/05/29 11:32:34 INFO mapred.JobClient: map 45% reduce 0%
13/05/29 11:34:08 INFO mapred.JobClient: map 46% reduce 0%
13/05/29 11:36:01 INFO mapred.JobClient: map 47% reduce 0%
13/05/29 11:36:57 INFO mapred.JobClient: map 48% reduce 0%
13/05/29 11:37:53 INFO mapred.JobClient: map 49% reduce 0%
13/05/29 11:39:50 INFO mapred.JobClient: map 50% reduce 0%
13/05/29 11:42:17 INFO mapred.JobClient: map 51% reduce 0%
13/05/29 11:43:26 INFO mapred.JobClient: map 52% reduce 0%
13/05/29 11:47:55 INFO mapred.JobClient: map 53% reduce 0%
13/05/29 11:48:25 INFO mapred.JobClient: map 54% reduce 0%
13/05/29 11:49:28 INFO mapred.JobClient: map 54% reduce 2%
13/05/29 11:49:31 INFO mapred.JobClient: map 54% reduce 4%
13/05/29 11:50:03 INFO mapred.JobClient: map 55% reduce 4%
13/05/29 11:50:49 INFO mapred.JobClient: map 56% reduce 4%
13/05/29 11:50:54 INFO mapred.JobClient: map 58% reduce 4%
13/05/29 11:51:21 INFO mapred.JobClient: map 59% reduce 4%
13/05/29 11:51:46 INFO mapred.JobClient: Task Id : attempt_201305290801_0002_m_000002_1, Status : FAILED
Task attempt_201305290801_0002_m_000002_1 failed to report status for 685 seconds. Killing!
13/05/29 11:52:09 INFO mapred.JobClient: map 61% reduce 4%
13/05/29 11:52:27 INFO mapred.JobClient: map 62% reduce 4%
13/05/29 11:52:53 INFO mapred.JobClient: map 63% reduce 4%
13/05/29 11:53:36 INFO mapred.JobClient: map 64% reduce 4%
13/05/29 11:53:57 INFO mapred.JobClient: map 65% reduce 4%
13/05/29 11:54:41 INFO mapred.JobClient: map 66% reduce 4%
13/05/29 11:55:51 INFO mapred.JobClient: map 67% reduce 4%
13/05/29 11:57:00 INFO mapred.JobClient: map 68% reduce 4%
13/05/29 11:57:04 INFO mapred.JobClient: map 69% reduce 4%
13/05/29 11:57:11 INFO mapred.JobClient: map 70% reduce 4%
13/05/29 11:57:41 INFO mapred.JobClient: map 71% reduce 4%
13/05/29 11:58:13 INFO mapred.JobClient: map 72% reduce 4%
13/05/29 11:58:45 INFO mapred.JobClient: map 73% reduce 4%
13/05/29 11:59:05 INFO mapred.JobClient: map 74% reduce 4%
13/05/29 11:59:08 INFO mapred.JobClient: map 74% reduce 6%
13/05/29 11:59:42 INFO mapred.JobClient: map 75% reduce 6%
13/05/29 11:59:52 INFO mapred.JobClient: map 76% reduce 6%
13/05/29 12:00:33 INFO mapred.JobClient: map 77% reduce 6%
13/05/29 12:00:53 INFO mapred.JobClient: map 78% reduce 6%
13/05/29 12:01:06 INFO mapred.JobClient: map 79% reduce 6%
13/05/29 12:01:51 INFO mapred.JobClient: map 80% reduce 6%
13/05/29 12:02:29 INFO mapred.JobClient: map 81% reduce 6%
13/05/29 12:02:39 INFO mapred.JobClient: map 82% reduce 6%
13/05/29 12:02:56 INFO mapred.JobClient: map 83% reduce 6%
13/05/29 12:03:36 INFO mapred.JobClient: map 84% reduce 6%
13/05/29 12:04:05 INFO mapred.JobClient: map 85% reduce 6%
13/05/29 12:04:59 INFO mapred.JobClient: map 86% reduce 6%
13/05/29 12:05:47 INFO mapred.JobClient: map 87% reduce 6%
13/05/29 12:07:04 INFO mapred.JobClient: map 88% reduce 6%
13/05/29 12:08:00 INFO mapred.JobClient: map 89% reduce 6%
13/05/29 12:08:32 INFO mapred.JobClient: map 90% reduce 6%
13/05/29 12:09:41 INFO mapred.JobClient: map 91% reduce 6%
13/05/29 12:10:04 INFO mapred.JobClient: map 92% reduce 6%
13/05/29 12:10:17 INFO mapred.JobClient: map 93% reduce 6%
13/05/29 12:10:45 INFO mapred.JobClient: map 94% reduce 6%
13/05/29 12:10:49 INFO mapred.JobClient: map 95% reduce 6%
13/05/29 12:11:00 INFO mapred.JobClient: map 96% reduce 6%
13/05/29 12:11:03 INFO mapred.JobClient: map 97% reduce 6%
13/05/29 12:11:12 INFO mapred.JobClient: map 98% reduce 6%
13/05/29 12:11:17 INFO mapred.JobClient: map 99% reduce 6%
13/05/29 12:12:02 INFO mapred.JobClient: map 100% reduce 6%
^C[hadoop@master ~]$
From the limited information that you gave (console output), it looks like the cluster isn't healthy.
13/05/29 11:03:20 INFO mapred.JobClient: Task Id : attempt_201305290801_0002_m_000002_0, Status : FAILED
Task attempt_201305290801_0002_m_000002_0 failed to report status for 604 seconds. Killing!
Tasks were attempted on some node that did not report back to the JobTracker within 10 minutes, which caused those tasks to be re-scheduled. Diving into more logs and identifying which particular node(s) keep failing the assigned tasks is something you should do.
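For reference, a few places to look on a Hadoop 1.x cluster (a sketch; the log file names follow the usual tarball layout and may differ on your install):
jps                                                       # on each slave: are the TaskTracker and DataNode running?
tail -n 100 $HADOOP_HOME/logs/hadoop-*-tasktracker-*.log  # look for the hung attempts and any stack traces
hadoop job -status job_201305290801_0002                  # failed/killed task counts for the job
# The JobTracker web UI (default port 50030) also breaks failures down per node.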
