I need to disable parallel execution of YARN applications in a Hadoop cluster. With the default YARN settings, several jobs can run in parallel, and I see no advantage in this because both jobs just run slower.
I found the setting yarn.scheduler.capacity.maximum-applications, which limits the maximum number of applications, but it affects both submitted and running apps (as stated in the docs). I'd like to keep submitted apps in the queue until the currently running application has finished. How can this be done?
1) Change Scheduler to FairScheduler
Hadoop uses the CapacityScheduler by default (Cloudera distributions use the FairScheduler as their default scheduler). Add this property to yarn-site.xml:
<property>
<name>yarn.resourcemanager.scheduler.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>
2) Set default Queue
The Fair Scheduler creates a queue per user, i.e., if three different users submit jobs, three individual queues are created and the resources are shared among them. Disable this by adding the following property to yarn-site.xml:
<property>
<name>yarn.scheduler.fair.user-as-default-queue</name>
<value>false</value>
</property>
This ensures that all jobs go into a single default queue.
3) Restrict Maximum Applications
Now that all jobs go into a single default queue, restrict the maximum number of applications that can run in that queue to 1.
Create a file named fair-scheduler.xml under $HADOOP_CONF_DIR and add these entries:
<allocations>
<queueMaxAppsDefault>1</queueMaxAppsDefault>
</allocations>
Also, add this property to yarn-site.xml:
<property>
<name>yarn.scheduler.fair.allocation.file</name>
<value>$HADOOP_CONF_DIR/fair-scheduler.xml</value>
</property>
Restart YARN services after adding these properties.
When multiple applications are submitted, the application that is ACCEPTED first becomes the active application and the rest are queued as pending applications. These pending applications remain in the ACCEPTED state until the RUNNING application is FINISHED. The active application is allowed to utilise all the available resources.
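You can confirm this behaviour by listing applications and their states while several jobs are submitted, for example:
yarn application -list -appStates ACCEPTED,RUNNING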
Reference: Hadoop: Fair Scheduler
As I understand your question, the settings above alone may not be enough. Try the configuration below with your existing setup; it may help:
<allocations>
  <defaultQueueSchedulingPolicy>fair</defaultQueueSchedulingPolicy>
  <queue name="<<Your Queue Name>>">
    <weight>40</weight>
    <schedulingPolicy>fifo</schedulingPolicy>
  </queue>
  <queue name="<<Your Queue Name>>">
    <weight>60</weight>
    <queue name="<<Your Queue Name>>" />
    <queue name="<<Your Queue Name>>" />
  </queue>
  <queuePlacementPolicy>
    <rule name="specified" create="false" />
    <rule name="primaryGroup" create="false" />
    <rule name="default" queue="<<Your Queue Name>>" />
  </queuePlacementPolicy>
</allocations>
I am using Hadoop 2.9.0. Is it possible to submit jobs with different priorities in YARN? According to some JIRA tickets it seems that application priorities have now been implemented.
I tried using the YarnClient and setting a priority on the ApplicationSubmissionContext before submitting the job. I also tried the CLI with updateApplicationPriority. However, nothing seems to change the application priority; it always remains 0.
Have I misunderstood the concept of application priority in YARN? I saw some documentation about setting priorities on queues, but for my use case I need all jobs in one queue.
Will appreciate any clarification on my understanding, or suggestions about what I could be doing wrong.
Thanks.
Yes, it is possible to set the priority of your applications on the YARN cluster.
Leaf Queue-level priority
You can define queues with different priorities and use spark-submit to submit your application to the specific queue with the desired priority.
Basically you can define your queues in etc/hadoop/capacity-scheduler.xml like this:
<property>
<name>yarn.scheduler.capacity.root.prod.queues</name>
<value>prod1,prod2</value>
<description>Production queues.</description>
</property>
<property>
<name>yarn.scheduler.capacity.root.test.queues</name>
<value>test1,test2</value>
<description>Test queues.</description>
</property>
See documentation of queue properties here
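Note that the prod and test parent queues themselves also have to be declared under root, and the capacities at each level must add up to 100. A minimal sketch (the capacity values are illustrative):
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>prod,test</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.prod.capacity</name>
  <value>60</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.test.capacity</name>
  <value>40</value>
</property>
The leaf queues (prod1, prod2, test1, test2) need their own capacity properties in the same way.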
Note: Application priority works only with the FIFO ordering policy, which is the default ordering policy.
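If a leaf queue's ordering policy has been switched to fair, it can be set back explicitly, for example (the queue path is illustrative):
<property>
  <name>yarn.scheduler.capacity.root.prod.prod1.ordering-policy</name>
  <value>fifo</value>
</property>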
In order to set application priority you can add properties like this to the same file:
<property>
<name>yarn.scheduler.capacity.root.test.default-application-priority</name>
<value>10</value>
<description>Test queues have low priority.</description>
</property>
<property>
<name>yarn.scheduler.capacity.root.prod.default-application-priority</name>
<value>90</value>
<description>Production queues have high priority.</description>
</property>
See more information about application priority here
Changing application priority at runtime:
If you want to change application priority at runtime you can also use the CLI like this:
yarn application -appId <ApplicationId> -updatePriority <Priority>
Can you share which command you execute, on which node, and what response you get?
See more info here
Using YarnClient
You did not share your code, so it is difficult to tell whether you are doing it right, but it is possible to submit a new application with a specific priority using YarnClient:
ApplicationClientProtocol.submitApplication(SubmitApplicationRequest)
See more info here
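Since your code was not shared, here is a minimal sketch (not your implementation) of setting a priority through YarnClient; the queue name and priority values are illustrative. One thing worth checking: priorities above yarn.cluster.max-application-priority are capped to that value, and its default is 0 if I remember correctly, which would explain the priority always staying at 0.
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.client.api.YarnClientApplication;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class PrioritySubmitSketch {
    public static void main(String[] args) throws Exception {
        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(new YarnConfiguration());
        yarnClient.start();

        YarnClientApplication app = yarnClient.createApplication();
        ApplicationSubmissionContext appContext = app.getApplicationSubmissionContext();
        appContext.setApplicationName("priority-demo");
        appContext.setQueue("default");                    // all jobs in a single queue
        // Priorities higher than yarn.cluster.max-application-priority are capped to that
        // value, so make sure the cluster maximum is raised in yarn-site.xml as well.
        appContext.setPriority(Priority.newInstance(10));

        // ... set the AM container launch context and resource request here before submitting ...
        ApplicationId appId = yarnClient.submitApplication(appContext);

        // Changing the priority of an already submitted application at runtime (Hadoop 2.8+):
        yarnClient.updateApplicationPriority(appId, Priority.newInstance(5));
        yarnClient.stop();
    }
}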
Below is my capacity scheduler configuration. I restricted access to the root queues so that the dev2 and qa2 users should only be able to submit to their own queues. However, I can still submit jobs to the qa queue as the dev2 user, which should not happen. I have also modified the corresponding Ranger YARN policies and disabled the super policy that granted all users access to all queues. Please advise.
yarn.scheduler.capacity.root.default.user-limit-factor=1
yarn.scheduler.capacity.root.default.state=RUNNING
yarn.scheduler.capacity.root.default.maximum-capacity=40
yarn.scheduler.capacity.root.default.capacity=40
yarn.scheduler.capacity.root.default.acl_submit_applications=
yarn.scheduler.capacity.root.default.acl_administer_jobs=
yarn.scheduler.capacity.root.capacity=100
yarn.scheduler.capacity.root.acl_administer_queue=
yarn.scheduler.capacity.root.accessible-node-labels=*
yarn.scheduler.capacity.node-locality-delay=40
yarn.scheduler.capacity.maximum-applications=10000
yarn.scheduler.capacity.maximum-am-resource-percent=0.2
yarn.scheduler.capacity.default.minimum-user-limit-percent=100
capacity-scheduler=null
yarn.scheduler.capacity.root.queues=dev,qa,default
yarn.scheduler.capacity.root.acl_administer_jobs=
yarn.scheduler.capacity.root.default.acl_administer_queue=
yarn.scheduler.capacity.root.default.user-limit=1
yarn.scheduler.capacity.root.dev.acl_submit_applications=dev2
yarn.scheduler.capacity.root.dev.capacity=30
yarn.scheduler.capacity.root.dev.maximum-capacity=30
yarn.scheduler.capacity.root.dev.user-limit=1
yarn.scheduler.capacity.root.qa.acl_submit_applications=qa2
yarn.scheduler.capacity.root.qa.capacity=30
yarn.scheduler.capacity.root.qa.maximum-capacity=30
yarn.scheduler.capacity.root.qa.user-limit=1
You are missing the property that blocks access to the root queue.
Here root is the parent queue for both the dev and qa child queues. Access to this queue is not restricted, so all users and groups have access to it and to its child queues.
Add this property to capacity-scheduler.xml:
<property>
<name>yarn.scheduler.capacity.root.acl_submit_applications</name>
<value> </value>
</property>
This blocks access to the root queue for all users and groups (the value is a single space, which means nobody), so the ACLs defined for the child queues become as restrictive as intended.
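After updating capacity-scheduler.xml, the queues can be refreshed without a restart, and the effective ACLs can then be checked as each user, for example:
yarn rmadmin -refreshQueues
mapred queue -showacls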
Since FIFO is the default scheduler in Hadoop 1.2.1, where exactly do I need to make changes to switch the default scheduler from FIFO to the Capacity or Fair Scheduler? I recently checked mapred-default.xml, which is inside hadoop-core-1.2.1.jar as directed in this answer, but I couldn't figure out where to change the scheduling criteria. Please provide guidance; thanks in advance.
where exactly do i need to make changes to change default scheduler from FIFO to capacity or fair
In the mapred-site.xml
Fair Scheduler
<property>
<name>mapred.jobtracker.taskScheduler</name>
<value>org.apache.hadoop.mapred.FairScheduler</value>
</property>
Capacity Scheduler
<property>
<name>mapred.jobtracker.taskScheduler</name>
<value>org.apache.hadoop.mapred.CapacityTaskScheduler</value>
</property>
Note, you may want to actually read the documentation from those links because they tell you how to set them up in detail.
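After editing mapred-site.xml, the JobTracker has to be restarted so the new scheduler class is picked up, for example:
bin/stop-mapred.sh
bin/start-mapred.sh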
I'm configuring the Hadoop 2.2.0 stable release with an HA namenode, but I don't know how to configure remote access to the cluster.
I have the HA namenode configured with manual failover and I defined dfs.nameservices. I can access HDFS via the nameservice from all the nodes included in the cluster, but not from outside.
I can perform operations on HDFS by contacting the active namenode directly, but I don't want that; I want to contact the cluster and then be redirected to the active namenode. I think this is the normal configuration for an HA cluster.
Does anyone know how to do that?
(thanks in advance...)
You have to add more values to hdfs-site.xml:
<property>
<name>dfs.ha.namenodes.myns</name>
<value>machine-98,machine-99</value>
</property>
<property>
<name>dfs.namenode.rpc-address.myns.machine-98</name>
<value>machine-98:8100</value>
</property>
<property>
<name>dfs.namenode.rpc-address.myns.machine-99</name>
<value>machine-145:8100</value>
</property>
<property>
<name>dfs.namenode.http-address.myns.machine-98</name>
<value>machine-98:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.myns.machine-99</name>
<value>machine-145:50070</value>
</property>
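In addition, for a client outside the cluster to resolve the logical nameservice and be redirected to the active namenode automatically, the client configuration also needs the failover proxy provider and a default filesystem pointing at the nameservice. A sketch, assuming the nameservice is called myns as above:
<!-- hdfs-site.xml on the client -->
<property>
  <name>dfs.nameservices</name>
  <value>myns</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.myns</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- core-site.xml on the client -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://myns</value>
</property>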
You need to contact one of the Name nodes (as you're currently doing) - there is no cluster node to contact.
The Hadoop client code knows the addresses of the two namenodes (from the client-side configuration) and can identify which is the active and which is the standby. There might be a way to interrogate a ZooKeeper node in the quorum to identify the active / standby (maybe, I'm not sure), but you might as well check one of the namenodes - you have a 50/50 chance it's the active one.
I'd have to check, but you might be able to query either if you're just reading from HDFS.
For the active namenode you can always ask ZooKeeper.
You can get the active namenode from the ZooKeeper path below:
/hadoop-ha/namenodelogicalname/ActiveStandbyElectorLock
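For example, the data under that znode can be read with the ZooKeeper CLI, or each namenode can simply be asked for its state with hdfs haadmin (the ZooKeeper address, nameservice, and namenode ID below are illustrative):
zkCli.sh -server zkhost:2181 get /hadoop-ha/myns/ActiveStandbyElectorLock
hdfs haadmin -getServiceState machine-98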
There are two ways to resolve this situation in Java code:
1) Put core-site.xml and hdfs-site.xml on the client and load them into the configuration via addResource.
2) Set the Hadoop configuration directly in your code via conf.set; an example of this approach follows.
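This is a minimal sketch of the conf.set approach; the nameservice and hostnames follow the hdfs-site.xml shown above and should be adjusted to your cluster:
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HaClientSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://myns");
        conf.set("dfs.nameservices", "myns");
        conf.set("dfs.ha.namenodes.myns", "machine-98,machine-99");
        conf.set("dfs.namenode.rpc-address.myns.machine-98", "machine-98:8100");
        conf.set("dfs.namenode.rpc-address.myns.machine-99", "machine-145:8100");
        conf.set("dfs.client.failover.proxy.provider.myns",
                 "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");

        // The client resolves "myns" and is transparently directed to the active namenode.
        FileSystem fs = FileSystem.get(URI.create("hdfs://myns"), conf);
        System.out.println(fs.exists(new Path("/")));
        fs.close();
    }
}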
I'm exploring the options for running a hadoop application on a local system.
As with many applications the first few releases should be able to run on a single node, as long as we can use all the available CPU cores (Yes, this is related to this question). The current limitation is that on our production systems we have Java 1.5 and as such we are bound to Hadoop 0.18.3 as the latest release (See this question). So unfortunately we can't use this new feature yet.
The first option is to simply run hadoop in pseudo distributed mode. Essentially: create a complete hadoop cluster with everything on it running on exactly 1 node.
The "downside" of this form is that it also uses a full fledged HDFS. This means that in order to process the input data this must first be "uploaded" onto the DFS ... which is locally stored. So this takes additional transfer time of both the input and output data and uses additional disk space. I would like to avoid both of these while we stay on a single node configuration.
So I was thinking: Is it possible to override the "fs.hdfs.impl" setting and change it from "org.apache.hadoop.dfs.DistributedFileSystem" into (for example) "org.apache.hadoop.fs.LocalFileSystem"?
If this works, the "local" Hadoop cluster (which can ONLY consist of ONE node) can use existing files without any additional storage requirements, and it can start more quickly because there is no need to upload the files. I would expect to still have a job and task tracker, and perhaps also a namenode to control the whole thing.
Has anyone tried this before?
Can it work or is this idea much too far off the intended use?
Or is there a better way of getting the same effect: Pseudo-Distributed operation without HDFS?
Thanks for your insights.
EDIT 2:
This is the conf/hadoop-site.xml config I created for Hadoop 0.18.3, using the answer provided by bajafresh4life.
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>fs.default.name</name>
<value>file:///</value>
</property>
<property>
<name>mapred.job.tracker</name>
<value>localhost:33301</value>
</property>
<property>
<name>mapred.job.tracker.http.address</name>
<value>localhost:33302</value>
<description>
The job tracker http server address and port the server will listen on.
If the port is 0 then the server will start on a free port.
</description>
</property>
<property>
<name>mapred.task.tracker.http.address</name>
<value>localhost:33303</value>
<description>
The task tracker http server address and port.
If the port is 0 then the server will start on a free port.
</description>
</property>
</configuration>
Yes, this is possible, although I'm using 0.19.2. I'm not too familiar with 0.18.3, but I'm pretty sure it shouldn't make a difference.
Just make sure that fs.default.name is set to the default (which is file:///), and mapred.job.tracker is set to point to where your jobtracker is hosted. Then start up your daemons using bin/start-mapred.sh. You don't need to start the namenode or datanodes. At this point you should be able to run your map/reduce jobs using bin/hadoop jar ...
We've used this configuration to run Hadoop over a small cluster of machines using a Netapp appliance mounted over NFS.
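For example, with fs.default.name left at file:///, a job reads and writes local paths directly with no copy onto HDFS; a hypothetical invocation (the jar name and paths are illustrative):
bin/hadoop jar hadoop-0.18.3-examples.jar wordcount /data/wordcount/input /data/wordcount/output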