Do the Mesos master and Mesos agent require root access? - mesos

Do the Mesos master and Mesos agent require root access? What is the default permission level for the Mesos master and the Mesos agent? Can they run with non-root access?

When I try to start my Mesos cluster without root access, I get the following error on the Mesos slave:
Log file created at: 2018/02/17 06:57:48
Running on machine: ubuntu
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
E0217 06:57:48.811517 46316 main.cpp:468] EXIT with status 1: Failed to initialize systemd: Failed to create systemd slice ‘mesos_executors.slice’: Failed to write systemd slice `/run/systemd/system/mesos_executors.slice`: Failed to open file ‘/run/systemd/system/mesos_executors.slice’: Permission denied
My Mesos cluster consists of three nodes, and all of the slave hosts got this error. So I started my Mesos cluster with root access on the Mesos master, and it worked.

You'll want to run mesos-agent with --no-systemd_enable_support if you don't want to give it root access and you are okay without the systemd support it provides.
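For example, a minimal sketch of starting the agent as a non-root user with that flag (the master address and work directory below are placeholders, not taken from the question):
mesos-agent --master=<master-ip>:5050 --work_dir=/tmp/mesos --no-systemd_enable_support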

No, the Mesos master and the Mesos agent don't need root access. Yes, they can run with non-root access.

Related

Failed to authenticate cluster nodes using pacemaker on CentOS 7

I am trying to configure a two-node (node1 and node2) HA cluster using Pacemaker on CentOS 7. I executed the steps below on both nodes:
yum install pcs
systemctl enable pcsd.service pacemaker.service corosync.service
systemctl start pcsd.service
passwd hacluster
After that, I executed the command below on node1:
pcs cluster auth node1 node2
I am getting the error below:
Error: Unable to communicate with node2
Error: Unable to communicate with node1
I have also verified that both nodes are listening on port 2224, and I used telnet to confirm that both nodes can connect to each other on 2224.
Need help.
The issue was resolved after using the FQDNs instead of the hostnames (node1.demo.in, node2.demo.in). The command below worked fine:
pcs cluster auth node1.demo.in node2.demo.in
I don't know the exact cause of this. Any idea?
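One possible explanation is name resolution: pcs connects to exactly the names you give it, so if the short hostnames don't resolve to the right addresses, authentication fails. A minimal sketch of a check is to map both FQDNs explicitly in /etc/hosts on each node (the addresses below are placeholders):
192.168.56.11  node1.demo.in  node1
192.168.56.12  node2.demo.in  node2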

Hadoop Multinode Cluster, slave permission denied

I'm trying to set up a multinode cluster (actually with 2 nodes: 1 master and 1 slave) on Hadoop. I followed the instructions in Multinode Cluster for Hadoop 2.x.
When I execute the command:
./sbin/start-all.sh
I get the following error message for my slave node:
slave: Permission denied (publickey)
I have already modified the .ssh/authorized_keys files on both master and slave and added the public keys from .ssh/id_rsa.pub of both machines.
Finally, I restarted ssh with the command sudo service ssh restart on both nodes (master and slave).
When executing ./sbin/start-all.sh I don't have a problem with the master node, but the slave node returns the permission denied error.
Does anybody have an idea why I cannot see the slave node?
Executing the jps command currently gives me the following result:
master
18339 Jps
17717 SecondaryNameNode
18022 NodeManager
17370 NameNode
17886 ResourceManager
slave
2317 Jps
I think the master is OK, but I have trouble with the slave.
After running ssh-keygen on the master, copy id_rsa.pub into authorized_keys using cat id_rsa.pub >> authorized_keys on all the slaves. Test password-less ssh using:
ssh <slave_node_IP>
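A minimal sketch of that whole sequence, using ssh-copy-id as an equivalent to the manual cat >> authorized_keys step (hadoopuser and slave1 below are placeholder names, not from the question):
ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoopuser@slave1
ssh hadoopuser@slave1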
If you have copied the whole Hadoop folder from the master to the slave nodes (for easy replication), make sure that the slave node's Hadoop folder is owned by the correct user on the slave system:
sudo chown -R <slave's username> </path/to/hadoop>
I ran this command on my slave system and it solved my problem.

Start Apache Mesos slave with Docker containerizer

I have a setup with Mesos and Aurora. I have dockerized the application I need to deploy, and now I have to start the Mesos slave with Docker support, but I'm not able to. I'm trying the following:
sudo service mesos-slave --containerizers=docker,mesos start
This gives me:
mesos-slave: unrecognized service
but if I try:
sudo service mesos-slave start
the slave gets activated.
Can anyone let me know how to solve this issue?
You should also inform people about what OS you're using, otherwise it's mostly guesswork.
Normally, your /etc/mesos-slave/containerizers should contain the following to enable Docker support:
docker,mesos
Then, you'd have to restart the service:
sudo service mesos-slave restart
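A minimal sketch of applying this on an init-script based install, assuming the /etc/mesos-slave path from above:
echo 'docker,mesos' | sudo tee /etc/mesos-slave/containerizers
sudo service mesos-slave restart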
References:
https://open.mesosphere.com/getting-started/install/#slave-setup
https://mesosphere.github.io/marathon/docs/native-docker.html
https://open.mesosphere.com/advanced-course/deploying-a-web-app-using-docker/

spark-submit in Amazon EC2

I have a Linux instance in Amazon EC2. I manually installed Spark on this instance and it works fine. Next, I wanted to set up a Spark cluster in Amazon.
I ran the following command in the ec2 folder:
spark-ec2 -k mykey -i mykey.pem -s 1 -t t2.micro launch mycluster
which successfully launched a master and a worker node. I can ssh into the master node using ssh -i mykey.pem ec2-user@master
I've also exported the keys: AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
I have a jar file (containing a simple Spark program) which I tried to submit to the master:
spark-submit --master spark://<master-ip>:7077 --deploy-mode cluster --class com.mycompany.SimpleApp ./spark.jar
But I get the following error:
Error connecting to master (akka.tcp://sparkMaster@<master>:7077).
Cause was: akka.remote.InvalidAssociation: Invalid address: akka.tcp://sparkMaster@<master>:7077
No master is available, exiting.
I've also updated the EC2 security settings for the master to accept all inbound traffic:
Type: All traffic, Protocol: All, Port Range: All, Source: 0.0.0.0/0
A common beginner mistake is to assume that Spark communication follows a program-to-master and master-to-workers hierarchy, whereas currently it does not.
When you run spark-submit, your program attaches to a driver running locally, which communicates with the master to get an allocation of workers. The driver then communicates with the workers. You can see this kind of communication between the driver (not the master) and the workers in a number of diagrams in this slide presentation on Spark at Stanford.
It is important that the computer running spark-submit be able to communicate with all of the workers, and not simply the master. While you can start an additional EC2 instance in a security zone allowing access to the master and workers or alter the security zone to include your home PC, perhaps the easiest way is to simply log on to the master and run spark-submit, pyspark or spark-shell from the master node.
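As a sketch, the submission from the question run on the master node itself might look like this (client deploy mode here is an assumption; the jar path and class name are taken from the question):
spark-submit --master spark://<master-ip>:7077 --deploy-mode client --class com.mycompany.SimpleApp ./spark.jar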

start-all.sh not working to run the process on slave node

I am trying to configure a multinode cluster with one master and one slave on my laptop. When I run start-all.sh from the master, all daemon processes run on the master node, but the DataNode and TaskTracker do not start on the slave node. Password-less ssh is enabled and I can ssh to both master and slave from my master node without a password, but if I try to ssh to the master from the slave node, it asks for a password. Is this the reason the daemon processes are not starting on the slave node? Do we require password-less ssh on both master and slave?
Sshing to the slave from the slave node does not ask for a password; only sshing to the master does. Please give me a solution for why I am not able to start the processes on the slave node from the master node.
You don't need password-less ssh from slave to master, only from master to slave.
A few things to consider:
Can you run Hadoop locally on the slave node?
Is the slave node included in the $HADOOP_CONF_DIR/slaves file of the master? (See the sketch after this list.)
Have you added the slave node in the /etc/hosts file of the master?
Are there any error messages in the log files of the slave?
Is the same version of Hadoop installed on the same path on both machines?
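A quick sketch of checking the slaves file and hosts entry on the master (the hostname and IP below are placeholders):
cat $HADOOP_CONF_DIR/slaves    # should list the slave's hostname, e.g. slave1
grep slave1 /etc/hosts         # should map that hostname to the slave's IP, e.g. 192.168.1.20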
