Can I run a Mesos/Marathon application on a specific host?

I want to use Marathon for cluster monitoring and management. Is the scenario below possible?
My Scenario
A Cassandra cluster (5 nodes) is already deployed and running.
The Cassandra hosts are physical machines.
I want to run a script on each host that verifies the health of Cassandra, e.g. the Cassandra process, disk usage, number of files, etc.
If a problem is found on a host, a corrective script is then run on that host. The script is launched manually.
Each script can be run as a Marathon application, but I couldn't find a way to run an application on a specific (failing) host.
There is no restriction on adding machines or installing Mesos components.
If you know of a more suitable tool, please recommend it!

If you are not running Cassandra on Mesos, I think Marathon is not the best choice. From your description, it looks like you need a monitoring tool (e.g., Nagios) rather than service orchestration.
Please extend your question with more information; it's not entirely clear what you are asking.
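That said, if you do decide to launch a Marathon app pinned to one particular host, a hostname constraint can do it. A minimal sketch, where the script path, host name, and Marathon address are all hypothetical:

# Hypothetical app definition pinned to one Cassandra host via a hostname constraint
cat > check-cassandra.json <<'EOF'
{
  "id": "/ops/check-cassandra-host1",
  "cmd": "/opt/scripts/check_cassandra.sh",
  "cpus": 0.1,
  "mem": 32,
  "instances": 1,
  "constraints": [["hostname", "CLUSTER", "cassandra-host-1"]]
}
EOF
# POST the app to Marathon; it will only be scheduled on cassandra-host-1
curl -X POST http://marathon.example.com:8080/v2/apps \
  -H 'Content-Type: application/json' -d @check-cassandra.json

Keep in mind that Marathon restarts tasks that exit, so a one-shot check script may be a better fit for a scheduler like Chronos.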

Related

Make spark environment for cluster

I made a Spark application that analyzes file data. Since the input data could be large, a single standalone machine is not enough to run my application. Given one more physical machine, how should I architect this?
I'm considering using Mesos as the cluster manager, but I'm pretty new to HDFS. Is there any way to do this without HDFS (for sharing the file data)?
Spark supports several cluster managers: YARN, Mesos, and Standalone. You may start with Standalone mode, which means you work against your cluster's own file system.
If you are running on Amazon EC2, you may refer to the following article to use Spark's built-in scripts that launch a Spark cluster automatically.
If you are running in an on-prem environment, the way to run in Standalone mode is as follows:
- Start a standalone master:
./sbin/start-master.sh
- The master will print out a spark://HOST:PORT URL for itself. For each worker (machine) in your cluster, use that URL in the following command:
./sbin/start-slave.sh <master-spark-URL>
- To validate that the worker was added to the cluster, browse to http://localhost:8080 on your master machine; the Spark UI there shows more info about the cluster and its workers.
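Once the master and workers are up, you submit the job against the standalone master with spark-submit. A minimal sketch, where the master hostname, main class, jar, and input path are all hypothetical; note that without HDFS, a file:// input path has to exist at the same location on every worker (e.g. on an NFS mount, or copied to each machine):

# Submit the application to the standalone master rather than running it locally
./bin/spark-submit \
  --master spark://spark-master.example.com:7077 \
  --class com.example.FileAnalysis \
  /opt/jobs/file-analysis.jar \
  file:///shared/data/input.txt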
There are many more parameters to play with. For more info, please refer to this documentation
Hope I have managed to help! :)

Provision to start group of applications on same Mesos slave

I have a cluster of 3 Mesos slaves running two applications: "redis" and "memcached". Redis depends on memcached, and the requirement is that both applications/services start on the same node rather than on different slave nodes.
So I created an application group and added the dependency properly in the JSON file. After launching the JSON via the /v2/groups REST API, I observe that sometimes both applications in the group start on the same node, but sometimes they start on different slaves, which breaks our requirement.
The intent/requirement is: if either application fails to start on a slave, both applications should fail over to another slave node. Also, can I configure the JSON file to tell Marathon to start the application group on slave-1 (a specific slave) first if it is available, and otherwise start it on another slave in the cluster? And if for some reason the group ends up on another slave, can Marathon relaunch the application group onto slave-1 once it is available to serve requests?
Thanks in advance for the help.
Edit/Update (2):
Mesos, Marathon, and DC/OS support for PODs is available now:
DC/OS: https://dcos.io/docs/1.9/usage/pods/using-pods/
Mesos: https://github.com/apache/mesos/blob/master/docs/nested-container-and-task-group.md
Marathon: https://github.com/mesosphere/marathon/blob/master/docs/docs/pods.md
I assume you are talking about Marathon apps.
Marathon application groups don't have any semantics concerning co-location on the same node, and the same is the case for dependencies.
You seem to be looking for a Kubernetes-like Pod abstraction in Marathon, which is on the roadmap but was not yet available at the time of writing (see the update above :-)).
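With the pods support mentioned in the update, a minimal sketch of a pod that co-locates both containers on the same agent; the image tags, resource sizes, and Marathon address are assumptions:

# Hypothetical pod definition: both containers always land on the same node
cat > cache-pod.json <<'EOF'
{
  "id": "/cache-pod",
  "containers": [
    {
      "name": "memcached",
      "resources": { "cpus": 0.5, "mem": 256 },
      "image": { "kind": "DOCKER", "id": "memcached:1.4" }
    },
    {
      "name": "redis",
      "resources": { "cpus": 0.5, "mem": 256 },
      "image": { "kind": "DOCKER", "id": "redis:3" }
    }
  ],
  "networks": [{ "mode": "host" }],
  "scaling": { "kind": "fixed", "instances": 1 }
}
EOF
# Pods are posted to the /v2/pods endpoint rather than /v2/apps
curl -X POST http://marathon.example.com:8080/v2/pods \
  -H 'Content-Type: application/json' -d @cache-pod.json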
Hope this helps!
I think this should be possible (as a workaround) if you specify the correct app constraints within the group's JSON.
Have a look at the example request at
https://mesosphere.github.io/marathon/docs/generated/api.html#v2_groups_post
and the constraints syntax at
https://mesosphere.github.io/marathon/docs/constraints.html
e.g.
"constraints": [["hostname", "CLUSTER", "slave-1"]]
should do. The downside is that there will be no automatic failover to another slave that way. Still, I'd be curious why both apps specifically need to run on the same slave node...
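A minimal sketch of such a group definition; the image tags, resource sizes, and Marathon address are assumptions. Both apps carry the same hostname constraint, and the dependency makes Marathon start memcached first:

# Hypothetical group: both apps pinned to slave-1, redis depends on memcached
cat > cache-group.json <<'EOF'
{
  "id": "/cache",
  "apps": [
    {
      "id": "memcached",
      "container": { "type": "DOCKER", "docker": { "image": "memcached:1.4" } },
      "cpus": 0.5, "mem": 256, "instances": 1,
      "constraints": [["hostname", "CLUSTER", "slave-1"]]
    },
    {
      "id": "redis",
      "dependencies": ["/cache/memcached"],
      "container": { "type": "DOCKER", "docker": { "image": "redis:3" } },
      "cpus": 0.5, "mem": 256, "instances": 1,
      "constraints": [["hostname", "CLUSTER", "slave-1"]]
    }
  ]
}
EOF
curl -X POST http://marathon.example.com:8080/v2/groups \
  -H 'Content-Type: application/json' -d @cache-group.json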

Have To Manually Start Hadoop Cluster on GCloud

I have been using a Hadoop cluster, created using Google's script, for a few months.
Every time I boot the machines I have to manually start Hadoop using:
sudo su hadoop
cd /home/hadoop/hadoop-install/sbin
./start-all.sh
Besides scripting, how can I resolve this?
Or is this just the way it is by default?
(The first boot after cluster creation always starts Hadoop automatically; why not every boot?)
You have to configure this using init.d.
The document provides more details and a sample script for Datameer. You need to follow similar steps. The script should be smart enough to check that all the nodes in the cluster are up before invoking start-all.sh over SSH.
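A minimal sketch of such an init script, assuming hypothetical worker hostnames and the install path from the question; it waits until every worker answers over SSH before starting Hadoop:

#!/bin/bash
# Hypothetical /etc/init.d/hadoop-autostart sketch
# chkconfig: 2345 90 10
WORKERS="hadoop-w-0 hadoop-w-1"
for host in $WORKERS; do
  # Block until this worker is reachable over SSH
  until ssh -o ConnectTimeout=5 "$host" true 2>/dev/null; do
    sleep 10
  done
done
# All workers are up; start Hadoop as the hadoop user
su - hadoop -c '/home/hadoop/hadoop-install/sbin/start-all.sh'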
Different third-party scripts and "getting started" solutions like Cloud Launcher have varying degrees of support for automatically restarting Hadoop on boot. The officially supported tools are bdutil, as a do-it-yourself deployment tool, and Google Cloud Dataproc, as a managed service; both are already configured with init.d and/or systemd to start Hadoop automatically on boot.
More detailed instructions on using bdutil here.

Configuring AWS cluster using automation script

We are looking into the possibility of an automation script to which we can specify how many master and data nodes we need, and which would then configure a cluster, probably taking the credentials from a properties file.
Currently our approach is to log in to the console and configure the Hadoop cluster by hand. It would be great if there were an automated way around it.
I've seen this done very nicely using Foreman, Chef, and Ambari Blueprints. Foreman was used to provision the VMs, and Chef scripts were used to install Ambari, configure the Ambari blueprint, and create the cluster using the Blueprint.
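A minimal sketch of the Ambari Blueprints step, assuming hypothetical credentials, Ambari host, and component layout; the number of nodes is driven by the cardinality fields:

# Hypothetical blueprint: one master host group and three workers
cat > blueprint.json <<'EOF'
{
  "Blueprints": { "blueprint_name": "hadoop-basic", "stack_name": "HDP", "stack_version": "2.6" },
  "host_groups": [
    { "name": "masters", "cardinality": "1",
      "components": [ { "name": "NAMENODE" }, { "name": "RESOURCEMANAGER" } ] },
    { "name": "workers", "cardinality": "3",
      "components": [ { "name": "DATANODE" }, { "name": "NODEMANAGER" } ] }
  ]
}
EOF
# Register the blueprint with Ambari; creating a cluster from it is a second POST
curl -u admin:admin -H 'X-Requested-By: ambari' \
  -X POST http://ambari.example.com:8080/api/v1/blueprints/hadoop-basic \
  -d @blueprint.json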

Docker container running Mesos cluster and running other docker containers on cluster (using Marathon)

I'm just starting off with Mesos, Docker, and Marathon, but I can't find this specific question answered anywhere.
I want to set up a Mesos cluster running on Docker (there are a couple of internet resources on how to do this), but then I want to run Docker containers on top of Mesos itself. This would mean Docker containers running inside other Docker containers.
Is there a problem with this? It doesn't intuitively seem right somehow, but it would be really handy to be able to do. Ideally I want to run a Mesos cluster (with Marathon, Chronos, etc.) and then run Hadoop within Docker containers on top of that. Is this possible, or even a standard way of doing things? Any other suggestions as to good practice would be appreciated.
Thanks
You should be able to run it, taking care of a few issues when running the Mesos (with Docker) containers, such as running them in privileged mode. Take a look at jpetazzo/dind to see how you can install and run Docker inside Docker. Then you can set up Mesos in that container to get one container with both Mesos and Docker installed.
There are some references around the Internet similar to what you want to do. Check this article and this project, which I think you will find very interesting.
There are definitely people running Mesos in Docker containers, but you'll need to use privileged mode and set up some volumes if you want Mesos to access the outer Docker binary (see this thread).
Current biggest caveat: don't name your mesos-slave containers "mesos-*" or MESOS-2016 will bite you. See epic MESOS-2115 for other remaining issues related to running mesos-slave in Docker containers.
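A minimal sketch of the privileged-mode-plus-volumes approach described above; the image tag, ZooKeeper address, and paths are assumptions, and the container name deliberately avoids the "mesos-*" prefix because of MESOS-2016:

# Hypothetical: run mesos-slave in a privileged container, mounting the
# outer Docker socket and binary so it can launch sibling containers
docker run -d --privileged --name slave_1 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /usr/bin/docker:/usr/bin/docker \
  -v /sys:/sys \
  mesosphere/mesos-slave:0.28.1 \
  --master=zk://zookeeper.example.com:2181/mesos \
  --containerizers=docker,mesos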
