Specifying resource requirements for Chronos jobs - Mesos

Is it possible to specify resource requirements (CPU, memory, ...) when scheduling a job in Chronos via the REST API? I found configuration options that allow specifying general resource requirements for each task, but I wonder whether it is possible to do this per job.

Generally it's possible to restrict resources per task, but you have to use cgroups isolation on the Mesos slaves. However, it seems that the Chronos API doesn't support it yet (see the GitHub issue for more details). Mesos is being developed quite rapidly, so be sure to check whether it is supported in your version.
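For reference, here is a minimal sketch of what a per-job resource specification could look like against the Chronos REST API, assuming a Chronos version whose job JSON accepts cpus/mem/disk fields; the host, port, and job values below are placeholders, and the endpoint path may differ in your version:

    # Hedged sketch: assumes a Chronos build whose job JSON accepts per-job
    # "cpus", "mem" and "disk" fields; host/port and job values are placeholders.
    import requests

    job = {
        "name": "nightly-report",
        "command": "python /opt/jobs/report.py",
        "schedule": "R/2015-01-01T02:00:00Z/P1D",  # ISO 8601 repeating interval
        "owner": "ops@example.com",
        "cpus": 0.5,   # per-job CPU share
        "mem": 512,    # MB
        "disk": 256,   # MB
    }

    resp = requests.post("http://chronos.example.com:4400/scheduler/iso8601", json=job)
    resp.raise_for_status()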

What is the difference between Apache Mesos and Nomad?
Nomad seems to claim that it can do resource management, so I wonder how that is different from Apache Mesos.
Nomad also claims the following on its website:
Nomad is architecturally much simpler. Nomad is a single binary, both for clients and servers, and requires no external services for coordination or storage. Nomad combines features of both resource managers and schedulers into a single system. This makes Nomad operationally simpler and enables more sophisticated optimizations.
Well, resource management alone is not enough for anyone to bring up a cluster, so Nomad obviously recommends buying into the rest of the HashiCorp products. I am therefore not sure how it is architecturally simpler when one has to integrate with pretty much all of their products to get a fully functional cluster.
Mesos does not support federation or multiple failure isolation regions. Nomad supports multi-datacenter and multi-region configurations for failure isolation and scalability.
Not sure if this is still true for Apache Mesos?
Nomad is currently advertised as an orchestrator for orchestrators.
Nomad only aims to provide cluster management and scheduling and is designed with the Unix philosophy of having a small scope while composing with tools like Consul for service discovery and Vault for secret management.
On the other hand, Mesos is more a framework for building distributed systems than just a container orchestrator. Of course, you can use it that way, but then you use only a minority of its features and don't take full advantage of its two-level scheduling design.
Nomad is architecturally much simpler. Nomad is a single binary, both for clients and servers, and requires no external services for coordination or storage. Nomad combines a lightweight resource manager and a sophisticated scheduler into a single system. By default, Nomad is distributed, highly available, and operationally simple.
The Mesos architecture is not that simple: it is a multi-binary project and definitely not easy to set up and run. Multiple moving parts are always more complicated to set up than a monolith, but they enable customization.
Mesos does not support federation or multiple failure isolation regions. Nomad supports multi-datacenter and multi-region configurations for failure isolation and scalability.
That's true. There is some work underway to bring federation to Mesos, but it's not done yet: https://youtu.be/kqyVQzwwD5E
Mesos and Nomad were created for slightly different purposes, although both of them are n-th level orchestrators, could be run one on top of the other, and could probably deliver similar features. Nomad is designed just to run simple stateless applications, while Mesos allows plugging in custom schedulers and gives fine-grained control over what/when/where things are deployed.

Why does Mesos offer resources?

What is the significance of the decision in Mesos for frameworks to be offered resources by Mesos? This seems to be mentioned a lot, but ultimately all of the logic is in the Mesos allocation module, so whether it's Mesos making and revoking offers or frameworks asking for resources, is this just a semantic difference?
Interesting question:
The original Mesos paper states the following rationale:
The master implements fine-grained sharing across frameworks using resource offers. Each resource offer is a list of free resources on multiple slaves. The master decides how many resources to offer to each framework according to an organizational policy
Having frameworks request resources instead would have the following consequences:
Frameworks would have to be aware of resources in the cluster (e.g., does the cluster have GPUs)
The logic of choosing which framework's request should be granted, given fairness and the existing free cluster resources, seems more complex and less scalable than the current allocation mechanism (I have no hard evidence here, just a feeling after having touched the Mesos allocator code)
Maybe most interestingly, the Mesos Scheduler interface includes a requestResources(const std::vector<Request>& requests) call. The default Mesos DRF allocator does not implement this call, but nothing prevents you from implementing an allocator which does.
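To make the offer-driven flow more concrete, here is a minimal sketch (not an official Mesos example) of a framework scheduler that only logs and declines whatever it is offered, assuming the legacy Python bindings (mesos.interface / mesos.native) and a ZooKeeper master URL; the class name, framework name, and master address are made up for illustration:

    # Sketch of the offer model: the master pushes offers to the framework;
    # the framework never asks "give me 2 CPUs" -- it inspects what it was
    # offered and either launches tasks against an offer or declines it.
    from mesos.interface import Scheduler, mesos_pb2
    from mesos.native import MesosSchedulerDriver

    class OfferLoggingScheduler(Scheduler):
        def resourceOffers(self, driver, offers):
            for offer in offers:
                cpus = sum(r.scalar.value for r in offer.resources if r.name == "cpus")
                mem = sum(r.scalar.value for r in offer.resources if r.name == "mem")
                print("Offer %s from agent %s: %.1f cpus, %.0f MB" %
                      (offer.id.value, offer.slave_id.value, cpus, mem))
                # Nothing to run in this sketch, so hand the resources back.
                driver.declineOffer(offer.id)

    framework = mesos_pb2.FrameworkInfo(user="", name="offer-logger")
    driver = MesosSchedulerDriver(OfferLoggingScheduler(), framework,
                                  "zk://localhost:2181/mesos")  # placeholder master
    driver.run()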
If you are interested in more details about cluster schedulers, I can recommend this blog post or the Omega paper.
Update:
This MesosCon talk discusses some future extensions to more optimistic offers: http://schd.ws/hosted_files/mesosconna2016/51/MesosCon_2016_OptimisticOffer.pdf

How to list resource offers from a particular Mesos slave

Is there a way to figure out all the resource offers coming from a particular slave? To give you some context: some of my slaves, which have a specific tag attached to them, are not making any offers, even though in the Mesos UI I see they are less than 50% loaded. I want to debug the root cause, and for that I need a way to see what offers are flowing from slave to master to framework. The framework in my case is Marathon.
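One way to start inspecting this (a sketch, not a definitive recipe): the master's state endpoint lists each framework's currently outstanding offers along with the cluster's slaves, so you can poll it and attribute offers to machines. Note that offers only appear while outstanding (sent to a framework, not yet accepted or declined), so polling may be needed; the master address is a placeholder, and the path and field names vary between Mesos releases (older versions expose /state.json):

    # Sketch: poll the Mesos master state endpoint and print outstanding offers.
    import requests

    MASTER = "http://mesos-master.example.com:5050"  # placeholder address

    state = requests.get(MASTER + "/master/state").json()

    # Map slave ids to hostnames so offers can be attributed to a machine.
    hosts = {s["id"]: s["hostname"] for s in state.get("slaves", [])}

    for fw in state.get("frameworks", []):
        for offer in fw.get("offers", []):
            slave_id = offer.get("slave_id")
            # Depending on the version this is a plain id string or an object.
            if isinstance(slave_id, dict):
                slave_id = slave_id.get("value")
            print(fw["name"], "holds an offer from", hosts.get(slave_id, slave_id),
                  offer.get("resources"))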

How many Marathon clusters / frameworks can Mesos handle?

Marathon does not support task configuration templates that could establish command patterns and avoid redundancy. We are trying to find a way around this; otherwise we would need to create hundreds of thousands of tasks, and it would be very difficult to manage those config files. One approach we are considering is running multiple Marathon clusters inside Mesos. So the questions are: can we run multiple Marathon clusters inside Mesos, and is there a limit to the number of frameworks Mesos can handle?
Yes, running multiple Marathon frameworks is not only possible but actually considered a best practice. There are many use cases, from scaling to Chinese Wall setups (especially in the financial services area).
For example, in DCOS we install a 'system Marathon' by default, and you can then install as many 'application', 'project', or 'group' Marathons as you like.
I'm not aware of a theoretical limit on the number of frameworks, but hey, this might actually be a good load test to run; I'll look into it.

Apache Aurora GPU Resources

I am checking out Apache Aurora with the aim of running scientific workflows (essentially a set of Python scripts executed in a particular sequence). I've successfully managed to run a few of these Aurora jobs, and it looks great for my particular use case.
I was wondering if there is a way to specify that a particular task (or job, in general) requires a number of GPU resources from my Apache Mesos cluster. Of course, Mesos needs to be aware of the GPU resources first, and it seems this is possible by defining these GPU resources as indicated here.
So the question is whether there is a way to communicate with Mesos via Aurora to accept offers with GPU resources available. As far as I can tell, the Resource object in Aurora is limited to CPU/Ram/Disk resources. Any hints are greatly appreciated.
Thanks!
I'm not familiar with Apache Aurora, but Mesosphere Marathon (a framework similar to Aurora in functionality) is limited to cpu, mem, and disk resources as well.
If you would like to use custom resources, you would probably need to write your own framework. Depending on your needs, it may not be that difficult. For inspiration, check the RENDLER framework.
As mentioned in the thread you are referring to, Mesos does not provide isolation for GPU (actually, for any custom) resources. Keep this in mind when doing resource math.
Looking at the Aurora tutorial, I assume you can just specify this resource as part of your job description:
resources = Resources(cpu = 2, ram = 4*GB, disk = 8*GB, gpu = 1),
Just keep in mind that this is an artificial resource for Mesos, so Mesos will not take care of resource isolation in this case. For example, if you have several GPUs on one system, your code would have to manage the isolation/scheduling between the different GPUs.
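Following that guess, a sketch of a full .aurora file (which is plain Python/Pystachio) might look like the one below. The cluster/role/environment names are placeholders, and the gpu keyword is the assumption from the answer above: it only helps if your Aurora build accepts it and your Mesos agents advertise a matching custom resource.

    # Sketch of an .aurora job file; "gpu" is the speculative custom resource
    # discussed above -- Mesos will match it in offers but will not isolate it.
    run_model = Process(
        name = 'run_model',
        cmdline = 'python train.py')  # your code still has to pick which GPU to use

    train_task = Task(
        processes = [run_model],
        resources = Resources(cpu = 2, ram = 4*GB, disk = 8*GB, gpu = 1))

    jobs = [Job(
        cluster = 'devcluster',       # placeholder cluster/role/environment
        role = 'www-data',
        environment = 'devel',
        name = 'train_model',
        task = train_task)]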
