Hadoop cluster is not running MapReduce jobs

I set up a small Hadoop cluster following these instructions, but using Hadoop version 2.7.4. The cluster seems to work OK, but I cannot run MapReduce jobs. In particular, when trying the following
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.4.jar randomwriter outd
the job prints
17/11/27 16:35:21 INFO client.RMProxy: Connecting to ResourceManager at ec2-yyy.eu-central-1.compute.amazonaws.com/xxx:8032
Running 0 maps.
Job started: Mon Nov 27 16:35:22 UTC 2017
17/11/27 16:35:22 INFO client.RMProxy: Connecting to ResourceManager at ec2-yyy.eu-central-1.compute.amazonaws.com/xxx:8032
17/11/27 16:35:22 INFO mapreduce.JobSubmitter: number of splits:0
17/11/27 16:35:22 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1511799491035_0006
17/11/27 16:35:22 INFO impl.YarnClientImpl: Submitted application application_1511799491035_0006
17/11/27 16:35:22 INFO mapreduce.Job: The url to track the job: http://ec2-yyy.eu-central-1.compute.amazonaws.com:8088/proxy/application_1511799491035_0006/
17/11/27 16:35:22 INFO mapreduce.Job: Running job: job_1511799491035_0006
and never gets past this state.
My yarn-site.xml looks as follows
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>ec2-yyy.eu-central-1.compute.amazonaws.com</value>
</property>
</configuration>
My mapred-site.xml looks as follows
<configuration>
<property>
<name>mapreduce.jobtracker.address</name>
<value>ec2-yyy.eu-central-1.compute.amazonaws.com:54311</value>
</property>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
Do you have an idea how I could approach this issue?
Thanks
c14
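A rough diagnostic sketch (not taken from the question itself): when a job hangs right after "Running job:" like this, a common first check is whether the ResourceManager has any healthy NodeManagers registered with free memory and vcores, and what state the application is stuck in. The application id below is the one from the output above:
# List all NodeManagers with their state and available memory/vcores;
# an empty list or 0 MB available usually explains a job stuck at submission
yarn node -list -all
# Show the application's state (stuck in ACCEPTED usually means the scheduler
# cannot find a container large enough for the ApplicationMaster)
yarn application -status application_1511799491035_0006
# Fetch the application logs once it has finished or been killed
yarn logs -applicationId application_1511799491035_0006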

Related

Facing issue while running MR Job on Yarn

I am trying to run an MR job on YARN. While running the MR job, it gets stuck while cleaning up the staging directory created by the process. Below is the log snippet:
2022-03-24 00:12:48 IST INFO [defaultEventExecutorGroup-2-1] org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at localhost/127.0.0.1:8032
2022-03-24 00:12:49 IST INFO [defaultEventExecutorGroup-2-1] org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 3
2022-03-24 00:12:49 IST INFO [defaultEventExecutorGroup-2-1] org.apache.hadoop.mapreduce.JobSubmitter - number of splits:3
2022-03-24 00:12:49 IST INFO [defaultEventExecutorGroup-2-1] org.apache.hadoop.mapreduce.JobSubmitter - Submitting tokens for job: job_1648055218115_0023
2022-03-24 00:12:49 IST INFO [defaultEventExecutorGroup-2-1] org.apache.hadoop.mapreduce.JobSubmitter - Cleaning up the staging area /user/pk/.staging/job_1648055218115_0023
Below is the dir structure on HDFS
drwx------ - pk supergroup 0 2022-03-24 00:12 /user/pk/.staging
mapred-site.xml
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:8021</value>
</property>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>localhost</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
Can someone suggest how we can resolve this issue?
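As a rough sketch of where to look (assuming a hypothetical output path /user/pk/output in place of the job's real one): the "Cleaning up the staging area" line is normally printed when job submission throws an exception, for example because the output path already exists or because of an HDFS permission problem, so the stack trace logged right after that line is the key piece of information.
# FileOutputFormat fails submission if the output path already exists;
# /user/pk/output is a placeholder for the job's actual output directory
hdfs dfs -ls /user/pk/output && hdfs dfs -rm -r /user/pk/output
# The staging directory must be owned and writable by the submitting user
hdfs dfs -ls /user/pk/.staging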

Why does my MapReduce job get stuck at Map 0% Reduce 0%?

I'm testing a filesystem on Hadoop 3.3.1, and every time I run a MapReduce job it gets stuck at "map 0% reduce 0%", while everything (containers, applications, etc.) stays in the "RUNNING" state. Does anybody know why, or what I should do to find out what the real problem is?
Here's the console output:
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar wordcount wordcount/input wordcount/output
2022-02-27 15:17:21,136 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at master/192.168.1.1:8032
2022-02-27 15:17:21,731 INFO input.FileInputFormat: Total input files to process : 5
2022-02-27 15:17:21,761 INFO mapreduce.JobSubmitter: number of splits:5
2022-02-27 15:17:22,064 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1645945958451_0002
2022-02-27 15:17:22,064 INFO mapreduce.JobSubmitter: Executing with tokens: []
2022-02-27 15:17:22,241 INFO conf.Configuration: resource-types.xml not found
2022-02-27 15:17:22,242 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2022-02-27 15:17:22,498 INFO impl.YarnClientImpl: Submitted application application_1645945958451_0002
2022-02-27 15:17:22,539 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1645945958451_0002/
2022-02-27 15:17:22,540 INFO mapreduce.Job: Running job: job_1645945958451_0002
2022-02-27 15:17:27,644 INFO mapreduce.Job: Job job_1645945958451_0002 running in uber mode : false
2022-02-27 15:17:27,646 INFO mapreduce.Job: map 0% reduce 0%
Here is my yarn-site.xml configuration:
<configuration>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>master</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.resource.cpu-vcores</name>
<value>8</value>
</property>
<property>
<name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
<value>99.0</value>
</property>
<property>
<name>fs.AbstractFileSystem.madfs.impl</name>
<value>madfs.AbsXxxFS</value>
</property>
</configuration>
My mapred-site.xml configuration:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master:10020</value>
</property>
<property>
<name>yarn.app.mapreduce.am.env</name>
<value>HADOOP_MAPRED_HOME=/home/xyz/hadoop-3.3.1</value>
</property>
<property>
<name>mapreduce.map.env</name>
<value>HADOOP_MAPRED_HOME=/home/xyz/hadoop-3.3.1</value>
</property>
<property>
<name>mapreduce.reduce.env</name>
<value>HADOOP_MAPRED_HOME=/home/xyz/hadoop-3.3.1</value>
</property>
</configuration>
P.S. There are no ERRORs or WARNINGs in the logs dumped by the ResourceManager and NodeManager, and the states of RMAppAttemptImpl, RMAppImpl, RMContainerImpl, ContainerMonitorImpl and ApplicationImpl are all stuck at RUNNING.
Here's the ResourceManager log hadoop-resourcemanager.out, and here's the NodeManager log hadoop-nodemanager.out.
Update:
The problem seems to be that the application failed to preempt the required resources...
The app status I got using yarn app -status <appId>:
Application Report :
Application-Id : application_1645945958451_0003
Application-Name : QuasiMonteCarlo
Application-Type : MAPREDUCE
User : xyz
Queue : default
Application Priority : 0
Start-Time : 1645976932440
Finish-Time : 0
Progress : 0%
State : RUNNING
Final-State : UNDEFINED
Tracking-URL : http://master:45299
RPC Port : 46709
AM Host : master
Aggregate Resource Allocation : 997310 MB-seconds, 486 vcore-seconds
Aggregate Resource Preempted : 0 MB-seconds, 0 vcore-seconds
Log Aggregation Status : DISABLED
Diagnostics :
Unmanaged Application : false
Application Node Label Expression : <Not set>
AM container Node Label Expression : <DEFAULT_PARTITION>
TimeoutType : LIFETIME ExpiryTime : UNLIMITED RemainingTime : -1seconds
Also, only 1 container is allocated to this job; does that have anything to do with the problem?
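As a rough sketch (not from the question itself): when only the ApplicationMaster container is ever allocated and progress stays at 0%, one common cause is that the requested map/reduce container sizes are larger than what any single NodeManager can offer, so it is worth comparing the requested sizes against what the nodes report. The property names below are the standard MapReduce/YARN ones, and <nodeId> is a placeholder:
# Memory and vcores each NodeManager is advertising to the scheduler
yarn node -list -all
yarn node -status <nodeId>
# Compare against the container sizes the job asks for
# (mapreduce.map.memory.mb, mapreduce.reduce.memory.mb,
# yarn.app.mapreduce.am.resource.mb); the AM log reachable through the
# tracking URL http://master:8088/proxy/<appId>/ shows the actual requests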

Unable to import data using Sqoop: exitCode=255

I am a noob in Hadoop and Spark. I have set up a Hadoop/Spark cluster (1 namenode, 2 datanodes). Now I am trying to import data from a database (MySQL) into HDFS using Sqoop, but it always fails:
16/07/27 16:50:04 INFO mapreduce.Job: Running job: job_1469629483256_0004
16/07/27 16:50:11 INFO mapreduce.Job: Job job_1469629483256_0004 running in uber mode : false
16/07/27 16:50:11 INFO mapreduce.Job: map 0% reduce 0%
16/07/27 16:50:13 INFO ipc.Client: Retrying connect to server: datanode1_hostname/172.31.58.123:59676. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
16/07/27 16:50:14 INFO ipc.Client: Retrying connect to server: datanode1_hostname/172.31.58.123:59676. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
16/07/27 16:50:15 INFO ipc.Client: Retrying connect to server: datanode1_hostname/172.31.58.123:59676. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
16/07/27 16:50:18 INFO mapreduce.Job: Job job_1469629483256_0004 failed with state FAILED due to: Application application_1469629483256_0004 failed 2 times due to AM Container for appattempt_1469629483256_0004_000002 exited with exitCode: 255
For more detailed output, check application tracking page:http://ip-172-31-55-182.ec2.internal:8088/cluster/app/application_1469629483256_0004Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1469629483256_0004_02_000001
Exit code: 255
Stack trace: ExitCodeException exitCode=255:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
at org.apache.hadoop.util.Shell.run(Shell.java:456)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 255
Failing this attempt. Failing the application.
16/07/27 16:50:18 INFO mapreduce.Job: Counters: 0
16/07/27 16:50:18 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
16/07/27 16:50:18 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 16.2369 seconds (0 bytes/sec)
16/07/27 16:50:18 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
16/07/27 16:50:18 INFO mapreduce.ImportJobBase: Retrieved 0 records.
16/07/27 16:50:18 ERROR tool.ImportTool: Error during import: Import job failed!
I am able to manually write to HDFS:
hdfs dfs -put <local file path> <hdfs path>
But when I run the Sqoop import command, it fails:
sqoop import --connect jdbc:mysql://<host>/<db_name> --username <USERNAME> --password <PASSWORD> --table <TABLE_NAME> --enclosed-by '\"' --fields-terminated-by , --escaped-by \\ -m 1 --target-dir <hdfs location>
Can anyone please tell me what I am doing wrong?
Here is the list of things that I have already tried:
Shutting down the cluster, formatting HDFS, then restarting the cluster (didn't help)
Made sure that HDFS is not in SAFE MODE
All the nodes have this in their /etc/hosts:
127.0.0.1 localhost
172.31.55.182 namenode_hostname
172.31.58.123 datanode1_hostname
172.31.58.122 datanode2_hostname
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
Configuration Files:
All Nodes: $HADOOP_CONF_DIR/core-site.xml:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://ip-172-31-55-182.ec2.internal:9000</value>
</property>
</configuration>
All Nodes: $HADOOP_CONF_DIR/yarn-site.xml:
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>ip-172-31-55-182.ec2.internal</value>
</property>
</configuration>
All Nodes: $HADOOP_CONF_DIR/mapred-site.xml:
<configuration>
<property>
<name>mapreduce.jobtracker.address</name>
<value>ip-172-31-55-182.ec2.internal:54311</value>
</property>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
NameNode Specific Configurations
$HADOOP_CONF_DIR/hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///mnt/hadoop_data/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.address</name>
<value>0.0.0.0:50010</value>
</property>
<property>
<name>dfs.datanode.http.address</name>
<value>0.0.0.0:50075</value>
</property>
<property>
<name>dfs.datanode.https.address</name>
<value>0.0.0.0:50475</value>
</property>
<property>
<name>dfs.datanode.ipc.address</name>
<value>0.0.0.0:50020</value>
</property>
</configuration>
$HADOOP_CONF_DIR/masters:
ip-172-31-55-182.ec2.internal
$HADOOP_CONF_DIR/slaves:
ip-172-31-58-123.ec2.internal
ip-172-31-58-122.ec2.internal
DataNode Specific Configurations
$HADOOP_CONF_DIR/hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///mnt/hadoop_data/hdfs/datanode</value>
</property>
<property>
<name>dfs.datanode.address</name>
<value>0.0.0.0:50010</value>
</property>
<property>
<name>dfs.datanode.http.address</name>
<value>0.0.0.0:50075</value>
</property>
<property>
<name>dfs.datanode.https.address</name>
<value>0.0.0.0:50475</value>
</property>
<property>
<name>dfs.datanode.ipc.address</name>
<value>0.0.0.0:50020</value>
</property>
</configuration>
From where are you trying to import the data? I mean, from which machine are you trying to connect? Check the masters and slaves files on both the namenode and the datanodes.
Try to ping the IP addresses from a different server and check whether they show as up.
Make these changes, restart your cluster, and try again:
Edit the parts marked with a comment (#) below, then remove the comments.
/etc/hosts file on client node:
127.0.0.1 localhost yourcomputername #get computername by "hostname -f" command and replace here
172.31.55.182 namenode_hostname ip-172-31-55-182.ec2.internal
172.31.58.123 datanode1_hostname ip-172-31-58-123.ec2.internal
172.31.58.122 datanode2_hostname ip-172-31-58-122.ec2.internal
/etc/hosts file on cluster nodes:
198.22.23.212 yourcomputername #change to the public IP of the client node, and use the same computername as on the client node
172.31.55.182 namenode_hostname ip-172-31-55-182.ec2.internal
172.31.58.123 datanode1_hostname ip-172-31-58-123.ec2.internal
172.31.58.122 datanode2_hostname ip-172-31-58-122.ec2.internal
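A quick way to verify the mapping after these edits (a sketch, not part of the original answer) is to confirm that every node resolves the others to the private 172.31.x.x addresses through /etc/hosts and that each node's own FQDN matches what it registers with YARN:
# The FQDN the node will report about itself
hostname -f
# Resolution should come from /etc/hosts and point at the private address
getent hosts datanode1_hostname ip-172-31-58-123.ec2.internal
# Basic reachability from the namenode/client to a datanode
ping -c 3 172.31.58.123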
I am terminating this cluster and starting from scratch.

MapReduce job ipc.Client retrying to connect

I am testing my Hadoop cluster, which consists of 4 Docker containers:
Datanode
Secondary Namenode
Namenode
Resource Manager
When I submit a MapReduce job I notice connection issues once both map and reduce are at 100%. The client then reaches the maximum number of retries before erroring out and printing a stack trace. The weird thing is that the job finishes and provides an answer. However, the NodeManager web interface shows a failed job. None of the questions/answers I have found so far fix my particular issue.
All my machines have exposed the port range 50100:50200 to comply with the 'yarn.app.mapreduce.am.job.client.port-range' property.
The job I submit is
sudo -u hdfs hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples-2.6.0-cdh5.7.1.jar pi 1 1
This is the output:
Number of Maps = 1
Samples per Map = 1
Wrote input for Map #0
Starting Job
16/06/18 19:14:07 INFO client.RMProxy: Connecting to ResourceManager at resource-manager/172.19.0.2:8032
16/06/18 19:14:08 INFO input.FileInputFormat: Total input paths to process : 1
16/06/18 19:14:08 INFO mapreduce.JobSubmitter: number of splits:1
16/06/18 19:14:08 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1466277178029_0001
16/06/18 19:14:08 INFO impl.YarnClientImpl: Submitted application application_1466277178029_0001
16/06/18 19:14:08 INFO mapreduce.Job: The url to track the job: http://resource-manager:8088/proxy/application_1466277178029_0001/
16/06/18 19:14:08 INFO mapreduce.Job: Running job: job_1466277178029_0001
16/06/18 19:14:15 INFO mapreduce.Job: Job job_1466277178029_0001 running in uber mode : false
16/06/18 19:14:15 INFO mapreduce.Job: map 0% reduce 0%
16/06/18 19:14:19 INFO mapreduce.Job: map 100% reduce 0%
16/06/18 19:14:26 INFO mapreduce.Job: map 100% reduce 100%
16/06/18 19:14:32 INFO ipc.Client: Retrying connect to server: 01d3c03f829a/172.19.0.4:50100. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
16/06/18 19:14:33 INFO ipc.Client: Retrying connect to server: 01d3c03f829a/172.19.0.4:50100. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
16/06/18 19:14:34 INFO ipc.Client: Retrying connect to server: 01d3c03f829a/172.19.0.4:50100. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
16/06/18 19:14:36 INFO mapreduce.Job: map 0% reduce 0%
16/06/18 19:14:36 INFO mapreduce.Job: Job job_1466277178029_0001 failed with state FAILED due to: Application application_1466277178029_0001 failed 2 times due to AM Container for appattempt_1466277178029_0001_000002 exited with exitCode: 1
For more detailed output, check application tracking page:http://resource-manager:8088/proxy/application_1466277178029_0001/AThen, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1466277178029_0001_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:561)
at org.apache.hadoop.util.Shell.run(Shell.java:478)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:738)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:213)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
16/06/18 19:14:36 INFO mapreduce.Job: Counters: 0
Job Finished in 28.862 seconds
Estimated value of Pi is 4.00000000000000000000
the container log has the following:
2016-06-18 19:14:32,273 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for application appattempt_1466277178029_0001_000002
2016-06-18 19:14:32,443 WARN [main] org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2016-06-18 19:14:32,475 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
2016-06-18 19:14:32,477 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN, Service: , Ident: (org.apache.hadoop.yarn.security.AMRMTokenIdentifier#3514a4c0)
2016-06-18 19:14:32,515 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred newApiCommitter.
2016-06-18 19:14:33,060 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Attempt num: 2 is last retry: true because a commit was started.
2016-06-18 19:14:33,061 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$NoopEventHandler
2016-06-18 19:14:33,067 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.jobhistory.EventType for class org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
2016-06-18 19:14:33,068 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
2016-06-18 19:14:33,118 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system is set solely by core-default.xml therefore - ignoring
2016-06-18 19:14:33,141 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system is set solely by core-default.xml therefore - ignoring
2016-06-18 19:14:33,162 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system is set solely by core-default.xml therefore - ignoring
2016-06-18 19:14:33,183 INFO [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Emitting job history data to the timeline server is not enabled
2016-06-18 19:14:33,185 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Will not try to recover. recoveryEnabled: true recoverySupportedByCommitter: false numReduceTasks: 1 shuffleKeyValidForRecovery: true ApplicationAttemptID: 2
2016-06-18 19:14:33,210 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system is set solely by core-default.xml therefore - ignoring
2016-06-18 19:14:33,212 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Previous history file is at hdfs://namenode:9000/user/hdfs/.staging/job_1466277178029_0001/job_1466277178029_0001_1.jhist
2016-06-18 19:14:33,621 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
2016-06-18 19:14:33,640 WARN [main] org.apache.hadoop.metrics2.impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-mrappmaster.properties,hadoop-metrics2.properties
2016-06-18 19:14:33,689 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2016-06-18 19:14:33,689 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics system started
2016-06-18 19:14:33,708 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: nodeBlacklistingEnabled:true
2016-06-18 19:14:33,708 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: maxTaskFailuresPerNode is 3
2016-06-18 19:14:33,708 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: blacklistDisablePercent is 33
2016-06-18 19:14:33,739 INFO [main] org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at resource-manager/172.19.0.2:8030
2016-06-18 19:14:33,814 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: maxContainerCapability: <memory:4096, vCores:4>
2016-06-18 19:14:33,814 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: queue: root.hdfs
2016-06-18 19:14:33,837 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system is set solely by core-default.xml therefore - ignoring
2016-06-18 19:14:33,840 INFO [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryCopyService: History file is at hdfs://namenode:9000/user/hdfs/.staging/job_1466277178029_0001/job_1466277178029_0001_1.jhist
2016-06-18 19:14:33,894 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer setup for JobId: job_1466277178029_0001, File: hdfs://namenode:9000/user/hdfs/.staging/job_1466277178029_0001/job_1466277178029_0001_2.jhist
2016-06-18 19:14:33,959 WARN [main] org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.io.IOException: Was asked to shut down.
2016-06-18 19:14:33,959 FATAL [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
java.io.IOException: Was asked to shut down.
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$4.run(MRAppMaster.java:1546)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1540)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1473)
2016-06-18 19:14:33,962 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting with status 1
A few times it says 'Cannot locate configuration' or 'Default file system is set solely by core-default.xml'. Is this significant? In case it changes anything, I am using the Cloudera repo to install the various Hadoop services instead of unpacking a .tar.gz.
My config files are:
core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://namenode:9000</value>
</property>
<property>
<name>hadoop.proxyuser.mapred.groups</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.mapred.hosts</name>
<value>*</value>
</property>
</configuration>
yarn-site.xml
<configuration>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>resource-manager</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>resource-manager:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>resource-manager:8030</value>
</property>
<property>
<description>Classpath for typical applications.</description>
<name>yarn.application.classpath</name>
<value>
$HADOOP_CONF_DIR,
$HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,
$HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,
$HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,
$HADOOP_YARN_HOME/*,$HADOOP_YARN_HOME/lib/*
</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.local-dirs</name>
<value>file:///data/1/yarn/local,file:///data/2/yarn/local,file:///data/3/yarn/local</value>
</property>
<property>
<name>yarn.nodemanager.log-dirs</name>
<value>file:///data/1/yarn/logs,file:///data/2/yarn/logs,file:///data/3/yarn/logs</value>
</property>
<property>
<name>yarn.log.aggregation-enable</name>
<value>true</value>
</property>
<property>
<description>Where to aggregate logs</description>
<name>yarn.nodemanager.remote-app-log-dir</name>
<value>hdfs://namenode:8020/var/log/hadoop-yarn/apps</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>resource-manager:8088</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>resource-manager:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>resource-manager:8033</value>
</property>
<property>
<name>yarn.nodemanager.delete.debug-delay-sec</name>
<value>600</value>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>4096</value>
<description>Amount of physical memory, in MB, that can be allocated for containers.</description>
</property>
<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>1000</value>
</property>
</configuration>
mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapred.job.tracker</name>
<value>namenode:8021</value>
</property>
<property>
<name>yarn.app.mapreduce.am.staging-dir</name>
<value>/user</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>history-server:10020</value>
<description>Enter your JobHistoryServer hostname.</description>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>history-server:19888</value>
<description>Enter your JobHistoryServer hostname.</description>
</property>
<property>
<name>yarn.app.mapreduce.am.job.client.port-range</name>
<value>50100-50200</value>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.permissions.superusergroup</name>
<value>hadoop</value>
</property>
<property>
<name>dfs.name.dir or dfs.namenode.name.dir</name>
<value>file:///data/1/dfs/nn,file:///nfsmount/dfs/nn</value>
</property>
<property>
<name>dfs.data.dir or dfs.datanode.data.dir</name>
<value>file:///data/1/dfs/dn,file:///data/2/dfs/dn,file:///data/3/dfs/dn,file:///data/4/dfs/dn</value>
</property>
<property>
<name>dfs.namenode.http-address</name>
<value>namenode:50070</value>
<description>
The address and the base port on which the dfs NameNode Web UI will listen.
</description>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
Thanks for reading.
For anyone who has the same issue, the solution is to add the following to hdfs-site.xml:
<property>
<name>dfs.safemode.threshold.pct</name>
<value>0</value>
</property>
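Setting dfs.safemode.threshold.pct to 0 means the NameNode no longer waits for a block-report threshold before leaving safe mode. If the underlying problem is that jobs are submitted while HDFS is still in safe mode, an alternative (a sketch, not part of the original answer) is to leave the threshold at its default and check or exit safe mode by hand before submitting:
# Check whether the NameNode is currently in safe mode
hdfs dfsadmin -safemode get
# Force it out of safe mode (only sensible once the DataNodes have reported in)
hdfs dfsadmin -safemode leave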

Need help in resolving the following error on Hadoop 2.6

Command executed:
hadoop jar /usr/local/hadoop-2.6.0/share/hadoop/tools/lib/hadoop-streaming-2.6.0.jar -input /user/hduser/samples/x.txt -output /user/hduser/samples/hadoop_output_data1 -mapper mapper2.py -reducer reducer2.py -file mapper2.py -file reducer2.py
15/10/05 17:37:04 INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
packageJobJar: [mapper2.py, reducer2.py] [] /tmp/streamjob1958064018029021376.jar tmpDir=null
15/10/05 17:37:04 INFO client.RMProxy: Connecting to ResourceManager at Hadoop1/192.168.10.2:8050
15/10/05 17:37:04 INFO client.RMProxy: Connecting to ResourceManager at Hadoop1/192.168.10.2:8050
15/10/05 17:37:06 INFO mapred.FileInputFormat: Total input paths to process : 1
15/10/05 17:37:07 INFO mapreduce.JobSubmitter: number of splits:2
15/10/05 17:37:07 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1444080901412_0001
15/10/05 17:37:08 INFO impl.YarnClientImpl: Submitted application application_1444080901412_0001
15/10/05 17:37:08 INFO mapreduce.Job: The url to track the job: http://Hadoop1:8088/proxy/application_1444080901412_0001/
15/10/05 17:37:08 INFO streaming.StreamJob: getLocalDirs(): [/tmp/hadoop-hduser/mapred/local]
15/10/05 17:37:08 INFO streaming.StreamJob: Running job: job_1444080901412_0001
15/10/05 17:37:08 INFO streaming.StreamJob: Job running in-process (local Hadoop)
15/10/05 17:37:09 INFO streaming.StreamJob: map 0% reduce 0%
15/10/05 17:37:45 INFO streaming.StreamJob: map 100% reduce 100%
15/10/05 17:37:46 INFO streaming.StreamJob: Job running in-process (local Hadoop)
15/10/05 17:37:46 ERROR streaming.StreamJob: Job not successful. Error: Task failed task_1444080901412_0001_m_000001
Job failed as tasks failed. failedMaps:1 failedReduces:0
15/10/05 17:37:46 INFO streaming.StreamJob: killJob...
15/10/05 17:37:46 INFO impl.YarnClientImpl: Killed application application_1444080901412_0001
Streaming Command Failed!
mapper file:
import sys
# input comes from STDIN (standard input)
for line in sys.stdin:
    # remove leading and trailing whitespace
    line = line.strip()
    # split the line into words
    words = line.split(',')
    if len(words) == 6:
        key = (',').join([words[0], words[1], words[2], words[3], words[4]])
        value = (',').join([words[5], "1"])
        print '%s\t%s' % (key, value)
Sample Input File:
10.10.1.22,10.10.1.13,0,23772,6,9900
10.10.1.12,10.10.1.21,55570,0,6,9900
10.10.1.22,10.10.1.13,0,24028,6,9900
10.10.1.21,10.10.1.12,0,46864,6,9900
10.10.1.12,10.10.1.21,56594,0,6,9900
10.10.1.22,10.10.1.13,0,25308,6,9900
10.10.1.12,10.10.1.21,57618,0,6,9900
10.10.1.21,10.10.1.12,0,48144,6,9900
10.10.1.22,10.10.1.13,0,25564,6,9900
10.10.1.12,10.10.1.21,58642,0,6,9900
10.10.1.22,10.10.1.13,0,26844,6,9900
10.10.1.21,10.10.1.12,0,48400,6,9900
10.10.1.12,10.10.1.21,59410,0,6,9900
mapred-site.xml
<configuration>
<property>
<name>mapreduce.job.tracker</name>
<value>Hadoop1:54311</value>
</property>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.task.io.sort.factor</name>
<value>100</value>
</property>
</configuration>
hdfs-site.xml for namenode
<configuration>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop-2.6.0/hadoop_data/hdfs/namenode</value>
</property>
</configuration>
yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>Hadoop1:8025</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>Hadoop1:8035</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>Hadoop1:8050</value>
</property>
</configuration>
core-site.xml
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://Hadoop1:9000</value>
</property>
</configuration>
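A rough way to narrow this down (a sketch, not from the question itself): run the streaming scripts locally on the sample input before involving YARN, since any Python error then shows up directly on the console, and check the failed task attempt's log for the real error. This assumes the sample lines are saved locally as x.txt and that reducer2.py (not shown above) reads key/value pairs from stdin:
# Run the mapper and reducer locally, outside Hadoop
cat x.txt | python mapper2.py | sort | python reducer2.py
# Streaming launches the scripts as executables, so a shebang such as
# #!/usr/bin/env python plus the executable bit are typically needed
chmod +x mapper2.py reducer2.py
# Task logs of the failed attempt (needs log aggregation enabled;
# otherwise use the tracking URL printed above)
yarn logs -applicationId application_1444080901412_0001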
