Apache Mesos Schedulers and Executors by example - mesos

I am trying to understand how the various components of Mesos work together, and I found an excellent tutorial that contains an architectural overview diagram.
I have a few concerns that aren't made clear, either in that article or in the official Mesos docs:
Where are the Schedulers running? Are there "Scheduler nodes" where only the Schedulers should be running?
If I was writing my own Mesos framework, what Scheduler functionality would I need to implement? Is it just a binary yes/no or accept/reject for Offers sent by the Master? Any concrete examples?
If I was writing my own Mesos framework, what Executor functionality would I need to implement? Any concrete examples?
What's a concrete example of a Task that would be sent to an Executor?
Are Executors "pinned" (permanently installed on) Slaves, or do they float around in an "on demand" type fashion, being installed and executed dynamically/on-the-fly?

Great questions!
I believe it would be really helpful to have a look at a sample framework such as Rendler. This will probably answer most of your questions and give you a feeling for the framework internals.
Let me now try to answer the questions which might still be open after that.
Scheduler Location
Schedulers do not run on any special nodes, but keep in mind that schedulers can fail over as well (like any part of a distributed system).
Scheduler functionality
Have a look at Rendler or at the framework development guide.
Executor functionality/Task
I believe Rendler is a good example to understand the Task/Executor relationship. Just start reading the README/description on the main github page.
Executor pinning
An executor is started on a node when the first task requiring that executor is sent to the node. After that it remains on the node.
Hope this helped!

To add to js84's excellent response,
Scheduler Location: Many users like to launch the schedulers via another framework like Marathon to ensure that if the scheduler or its node dies, then it can be restarted elsewhere.
Scheduler functionality: After registering with Mesos, your scheduler will start getting resource offers in the resourceOffers() callback, in which your scheduler should launch (at least) one task on a subset (or all) of the resources being offered. You'll probably also want to implement the statusUpdate() callback to handle task completion/failure.
Note that you may not even need to implement your own scheduler if an existing framework like Marathon/Chronos/Aurora/Kubernetes could suffice.
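As a rough illustration of the resourceOffers()/statusUpdate() flow described above, here is a minimal sketch of a scheduler using the Mesos Java bindings. It is not taken from any real framework; the framework name, the echo command and the launch-one-task-then-stop logic are all made up for the example.

    import org.apache.mesos.*;
    import org.apache.mesos.Protos.*;
    import java.util.*;

    // Minimal sketch of a framework scheduler: launch one shell task on the first
    // suitable offer and watch its status updates. Not production code.
    public class ExampleScheduler implements Scheduler {
      private int launched = 0;

      @Override
      public void resourceOffers(SchedulerDriver driver, List<Offer> offers) {
        for (Offer offer : offers) {
          if (launched >= 1) {                     // nothing left to do
            driver.declineOffer(offer.getId());
            continue;
          }
          TaskID taskId = TaskID.newBuilder().setValue("task-" + (++launched)).build();
          TaskInfo task = TaskInfo.newBuilder()
              .setName("example-task")
              .setTaskId(taskId)
              .setSlaveId(offer.getSlaveId())
              .addAllResources(offer.getResourcesList())  // just use what was offered
              .setCommand(CommandInfo.newBuilder().setValue("echo 'Hello World'"))
              .build();
          driver.launchTasks(Collections.singletonList(offer.getId()),
                             Collections.singletonList(task));
        }
      }

      @Override
      public void statusUpdate(SchedulerDriver driver, TaskStatus status) {
        System.out.println("Task " + status.getTaskId().getValue()
            + " is now " + status.getState());
        if (status.getState() == TaskState.TASK_FINISHED) {
          driver.stop();
        }
      }

      // The remaining Scheduler callbacks are left empty in this sketch.
      @Override public void registered(SchedulerDriver d, FrameworkID id, MasterInfo m) { }
      @Override public void reregistered(SchedulerDriver d, MasterInfo m) { }
      @Override public void offerRescinded(SchedulerDriver d, OfferID o) { }
      @Override public void frameworkMessage(SchedulerDriver d, ExecutorID e, SlaveID s, byte[] data) { }
      @Override public void disconnected(SchedulerDriver d) { }
      @Override public void slaveLost(SchedulerDriver d, SlaveID s) { }
      @Override public void executorLost(SchedulerDriver d, ExecutorID e, SlaveID s, int status) { }
      @Override public void error(SchedulerDriver d, String message) { }

      public static void main(String[] args) {
        FrameworkInfo framework = FrameworkInfo.newBuilder()
            .setUser("")                          // let Mesos fill in the current user
            .setName("example-framework")
            .build();
        // args[0] would be the master address, e.g. "host:5050" or "zk://host:2181/mesos"
        MesosSchedulerDriver driver =
            new MesosSchedulerDriver(new ExampleScheduler(), framework, args[0]);
        driver.run();
      }
    }

Because the task above carries a CommandInfo directly, it is run by the default mesos-executor mentioned below; no custom executor code is needed for it.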
Executor functionality: You usually don't need to create a custom executor if you just want to launch a Linux process or Docker container and know when it completes. You could just use the default mesos-executor (by specifying a CommandInfo directly in TaskInfo, instead of embedded inside an ExecutorInfo). If, however, you want to build a custom executor, at minimum you need to implement launchTask(), and ideally also killTask().
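If you do go the custom-executor route, a minimal sketch with the Java bindings might look like the following; the class name is made up, and a real executor would do actual work where the comment indicates.

    import org.apache.mesos.*;
    import org.apache.mesos.Protos.*;

    // Minimal sketch of a custom executor: run each task in its own thread
    // and report TASK_RUNNING / TASK_FINISHED back to the scheduler.
    public class ExampleExecutor implements Executor {

      @Override
      public void launchTask(ExecutorDriver driver, TaskInfo task) {
        new Thread(() -> {
          driver.sendStatusUpdate(TaskStatus.newBuilder()
              .setTaskId(task.getTaskId())
              .setState(TaskState.TASK_RUNNING)
              .build());

          // ... do the actual work here, e.g. interpret task.getData() ...

          driver.sendStatusUpdate(TaskStatus.newBuilder()
              .setTaskId(task.getTaskId())
              .setState(TaskState.TASK_FINISHED)
              .build());
        }).start();
      }

      @Override
      public void killTask(ExecutorDriver driver, TaskID taskId) {
        // Stop the work and acknowledge the kill.
        driver.sendStatusUpdate(TaskStatus.newBuilder()
            .setTaskId(taskId)
            .setState(TaskState.TASK_KILLED)
            .build());
      }

      // The remaining Executor callbacks are left empty in this sketch.
      @Override public void registered(ExecutorDriver d, ExecutorInfo e, FrameworkInfo f, SlaveInfo s) { }
      @Override public void reregistered(ExecutorDriver d, SlaveInfo s) { }
      @Override public void disconnected(ExecutorDriver d) { }
      @Override public void frameworkMessage(ExecutorDriver d, byte[] data) { }
      @Override public void shutdown(ExecutorDriver d) { }
      @Override public void error(ExecutorDriver d, String message) { }

      public static void main(String[] args) {
        new MesosExecutorDriver(new ExampleExecutor()).run();
      }
    }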
Example Task: An example task could be a simple Linux command like sleep 1000 or echo "Hello World", or a Docker container (via ContainerInfo) like image: 'mysql'. Or, if you use a custom executor, the executor defines what a task is and how to run it, so a task could instead run as another thread in the executor's process, or just become an item in a queue in a single-threaded executor.
Executor pinning: The executor is distributed via CommandInfo URIs, just like any task binaries, so it does not need to be preinstalled on the nodes. Mesos will fetch and run it for you.
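For illustration, shipping a custom executor that way could look roughly like this with the Java bindings; the executor id, command and URL below are placeholders.

    // Hypothetical helper: point Mesos at the executor binary via a CommandInfo URI.
    // The agent fetches the URI into the task sandbox and runs the command.
    ExecutorInfo buildExecutor() {
      return ExecutorInfo.newBuilder()
          .setExecutorId(ExecutorID.newBuilder().setValue("example-executor"))
          .setCommand(CommandInfo.newBuilder()
              .setValue("./example-executor")                       // what to run
              .addUris(CommandInfo.URI.newBuilder()
                  .setValue("http://example.com/example-executor")  // where to fetch it
                  .setExecutable(true)))
          .build();
    }

    // A TaskInfo then references the executor instead of carrying a CommandInfo:
    //   TaskInfo.newBuilder()...setExecutor(buildExecutor())...build();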

Schedulers: A scheduler implements the strategy for accepting or rejecting offers. You can write your own scheduler or use an existing one such as Chronos. In the scheduler you evaluate the resources being offered and then either accept or reject the offer.
Scheduler functionality: For example, suppose you have a task which needs 8 CPUs to run, but the offer from Mesos only contains 6 CPUs; that won't serve the need, so in this case you can reject the offer (see the sketch below).
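A rough sketch of that check with the Mesos Java bindings; the 8-CPU requirement is just the made-up number from the example above.

    // Inside your Scheduler implementation: decline offers that are too small.
    @Override
    public void resourceOffers(SchedulerDriver driver, List<Offer> offers) {
      double neededCpus = 8.0;
      for (Offer offer : offers) {
        double offeredCpus = 0.0;
        for (Resource r : offer.getResourcesList()) {
          if (r.getName().equals("cpus")) {
            offeredCpus += r.getScalar().getValue();
          }
        }
        if (offeredCpus < neededCpus) {
          driver.declineOffer(offer.getId());   // e.g. only 6 CPUs offered
        } else {
          // enough CPUs: build a TaskInfo and call driver.launchTasks(...)
        }
      }
    }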
Executor functionality: The executor handles state information for your task. You implement a set of callbacks that report, for example, the status of the task assigned to it on the Mesos slave where the executor is running.
Concrete example of an executor: Chronos.
Being installed and executed dynamically/on-the-fly: That is not possible; you need to preconfigure the executors. However, you can replicate the executors using autoscaling.

Related

Deploy 2 different topologies on a single Nimbus with 2 different hardware types

I have 2 sets of Storm topologies in use today; one is up 24/7 and does its own work.
The other is deployed on demand and handles much bigger loads of data.
As of today, we have N supervisor instances, all on the same type of hardware (CPU/RAM). I'd like my on-demand topology to run on stronger hardware, but as far as I know there's no way to control which supervisor is assigned to which topology.
So if I can't control it, it's possible that the 24/7 topology would assign one of the stronger workers to itself.
Any ideas, if there is such a way?
Thanks in advance
Yes, you can control which topologies go where. This is the job of the scheduler.
You very likely want either the isolation scheduler or the resource aware scheduler. See https://storm.apache.org/releases/2.0.0-SNAPSHOT/Storm-Scheduler.html and https://storm.apache.org/releases/2.0.0-SNAPSHOT/Resource_Aware_Scheduler_overview.html.
The isolation scheduler lets you prevent Storm from running any other topologies on the machines you use to run the on demand topology. The resource aware scheduler would let you set the resource requirements for the on demand topology, and preferentially assign the strong machines to the on demand topology. See the priority section at https://storm.apache.org/releases/2.0.0-SNAPSHOT/Resource_Aware_Scheduler_overview.html#Topology-Priorities-and-Per-User-Resource.
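As a hedged sketch of the topology-side configuration for the resource aware scheduler: the component classes and numbers below are placeholders, and the exact setter names may differ slightly between Storm versions, so check the linked docs for your release.

    import org.apache.storm.Config;
    import org.apache.storm.topology.TopologyBuilder;

    // Sketch: declare per-component resource needs plus a topology priority so the
    // Resource Aware Scheduler can place the heavy on-demand topology on the big boxes.
    // IngestSpout and CrunchBolt stand in for your own components.
    TopologyBuilder builder = new TopologyBuilder();

    builder.setSpout("ingest", new IngestSpout(), 4)
           .setCPULoad(50.0)          // % of a core per executor
           .setMemoryLoad(2048.0);    // on-heap memory in MB per executor

    builder.setBolt("crunch", new CrunchBolt(), 8)
           .setCPULoad(100.0)
           .setMemoryLoad(4096.0)
           .shuffleGrouping("ingest");

    Config conf = new Config();
    conf.setTopologyWorkerMaxHeapSize(4096.0);
    conf.setTopologyPriority(10);     // lower number = higher priority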

Why are mapreduce attempts killed due to "Container preempted by scheduler"?

I just noticed the fact that many Pig jobs on Hadoop are killed due to the following reason: Container preempted by scheduler
Could someone explain to me what causes this, and whether I should (and am able to) do something about it?
Thanks!
If you have the fair scheduler and a number of different queues enabled, then higher-priority applications can terminate your jobs (in a preemptive fashion).
Hortonworks has a pretty good explanation with more details:
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_yarn_resource_mgt/content/preemption.html
Should you do anything about it? That depends on whether your application is within its SLAs and performing within expectations. General good practice would be to review your job priority and the queue it's assigned to, for example:
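A minimal sketch of pinning a MapReduce job to a specific queue when it is submitted; the queue and job names are placeholders for whatever your admins have configured.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class QueueExample {
      public static void main(String[] args) throws Exception {
        // Sketch: submit the job to a specific (hypothetical) queue instead of the default.
        Configuration conf = new Configuration();
        conf.set("mapreduce.job.queuename", "etl-high-priority");
        Job job = Job.getInstance(conf, "nightly-aggregation");
        // ... set mapper/reducer/paths as usual, then job.waitForCompletion(true);
      }
    }

For Pig jobs specifically, the same property can usually be passed on the command line (-Dmapreduce.job.queuename=...) or via SET in the script.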
If your Hadoop cluster is used by many business units, then admins decide the queue for each of them, and every queue has its own priority (that, too, is decided by the admins). If preemption is enabled at the scheduler level, higher-priority applications do not have to wait just because lower-priority applications have taken up the available capacity. In that case lower-priority tasks have to release resources, if none are otherwise available in the cluster, to let the higher-priority applications run.

Confused on Mesos Terminologies

I went through the video on the introduction of DCOS. It was good, but it got me somewhat confused about how the various components are classified in Mesosphere's terminology.
1) I get that DCOS is an ecosystem and Mesos is like a kernel. Please correct me if I am wrong. For example, it's like Ubuntu and the Linux kernel, I presume.
2) What is Marathon? Is it a service or a framework, or is it something else that falls in neither category? I am a bit confused about service vs framework vs application vs task definitions in Mesosphere's context.
3) Can the services (Cassandra, HDFS, Kubernetes, etc.) that he launches in the video also safely be called frameworks?
4) From 3, are these "services" running as executors on the slaves?
5) What should the rails-app's type be here? Is it a task? So will it also have an executor?
6) Who makes the decision to autoscale the rails-app to more nodes when he increases the traffic using Marathon?
1) I get that DCOS is an ecosystem and Mesos is like a kernel. Please correct me if I am wrong. For example, it's like Ubuntu and the Linux kernel, I presume.
Correct!
2) What is Marathon? Is it a service or a framework, or is it something else that falls in neither category? I am a bit confused about service vs framework vs application vs task definitions in Mesosphere's context.
In Apache Mesos terminology, Marathon is a framework. Every framework consists of a framework scheduler and an executor; many frameworks reuse the standard executor rather than providing their own. An app is a Marathon-specific term, meaning the long-running task you launch through it. A task is the unit of execution, running on a Mesos agent (in an executor). In DC/OS (the product; Mesosphere is our company) we call frameworks in general services. Also, in the context of DC/OS, Marathon plays a special role: it acts as a sort of distributed init.d, launching other services such as Spark or Kafka.
3) Can the services (Cassandra, HDFS, Kubernetes, etc.) that he launches in the video also safely be called frameworks?
See above.
4) From 3), are these "services" running as executors in the slaves?
No. See above.
5) What should the rails-app's type be here? Is it a task? So will it also have an executor?
The Rails app may have one or more (Mesos) tasks running in executors on one or more agents.
6) Who makes the decision to autoscale the rails-app to more nodes when he increases the traffic using Marathon?
Not nodes but instances of the app. Also, as #air suggested, autoscaling with Marathon is simple; see also this autoscaling example.
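For reference, scaling an app is just a change to its instances count through Marathon's REST API. A rough sketch of what an external autoscaler could do, using the Java 11+ HttpClient; the Marathon host, app id and instance count are placeholders.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ScaleApp {
      public static void main(String[] args) throws Exception {
        // Sketch: ask Marathon to run 5 instances of the (hypothetical) app "rails-app".
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://marathon.example.com:8080/v2/apps/rails-app"))
            .header("Content-Type", "application/json")
            .PUT(HttpRequest.BodyPublishers.ofString("{\"instances\": 5}"))
            .build();

        HttpResponse<String> response =
            HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
      }
    }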

Mesos scheduling - how does this work?

I am trying to figure out how to use Mesos.
I have a Mesos master and a slave running (in a single-node setup).
And I have understood that the framework listens for resource offers and accepts them if it can, and after that the task goes to the executor to be executed.
How can I tell Mesos "Hi, I want to execute some task with 1 CPU and 256 MB"? Whose task is that, the framework's? Or is there another API for doing this?
Yosi
I think I understood it finally after a debugging session :)
The framework gets a resource offer; when it gets one, it checks whether it has a task to launch that matches the offer. If so, it runs the task.
I thought there was an external service that I needed to call which would initiate a resource offer.
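In code terms, the "1 cpu and 256 mb" part is something your framework's scheduler declares when it builds a TaskInfo inside resourceOffers(). A rough sketch with the Mesos Java bindings; the task name and command are made up for the example.

    // Inside resourceOffers(): describe a task that needs exactly 1 CPU and 256 MB.
    TaskInfo buildTask(Offer offer) {
      Resource cpus = Resource.newBuilder()
          .setName("cpus")
          .setType(Value.Type.SCALAR)
          .setScalar(Value.Scalar.newBuilder().setValue(1.0))
          .build();

      Resource mem = Resource.newBuilder()
          .setName("mem")
          .setType(Value.Type.SCALAR)
          .setScalar(Value.Scalar.newBuilder().setValue(256.0))
          .build();

      return TaskInfo.newBuilder()
          .setName("my-task")
          .setTaskId(TaskID.newBuilder().setValue("my-task-1"))
          .setSlaveId(offer.getSlaveId())
          .addResources(cpus)
          .addResources(mem)
          .setCommand(CommandInfo.newBuilder().setValue("sleep 1000"))
          .build();
    }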

Spring quartz/cron jobs in a distributed environment

I have a fleet of about 5 servers. I want to run an identical Spring/Tomcat app on each machine.
I also need a particular task to be executed every ten minutes. It should only run on one of the machines. I need some sort of election protocol or other similar solution.
Does Spring or Quartz have any sort of built-in distributed cron solution, or do I need to implement something myself?
Hazelcast has a distributed executor framework which you can use to run jobs using the JDK Executor framework (which, by the way, is possibly more testable than horrid Quartz... maybe). It has a number of modes of operation, including having it pick a single node "at random" to execute your job on.
See the documentation for more details
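As a hedged sketch of the "pick one node and run the job there" part with Hazelcast's distributed executor (Hazelcast 3.x-style API; the task class and executor name are made up):

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.IExecutorService;
    import java.io.Serializable;

    public class ClusterJobExample {

      // The task must be serializable so Hazelcast can ship it to whichever member runs it.
      public static class TenMinuteTask implements Runnable, Serializable {
        @Override
        public void run() {
          System.out.println("Running the periodic task on exactly one member");
        }
      }

      public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IExecutorService executor = hz.getExecutorService("scheduled-jobs");

        // execute(...) hands the task to a single member of the cluster to run.
        executor.execute(new TenMinuteTask());
      }
    }

Note that the every-ten-minutes trigger still has to come from somewhere; newer Hazelcast versions also provide an IScheduledExecutorService whose scheduleAtFixedRate(...) lets the cluster own the schedule itself, which may be a better fit here.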
