I used the following command to change the port:
sudo ./bin/start --master local --zk zk://10.20.8.106:2181/marathon --http_port=7070
But it's not working; I get:
[scallop] Error: Unknown option 'http_port=7070'.
Since I am accessing the Ubuntu machine through PuTTY, is it necessary to give the IP address of the Ubuntu machine instead of "local" in the command?
I think you should get rid of the = character, like this:
sudo ./bin/start --master local --zk zk://10.20.8.106:2181/marathon --http_port 7070
Have a look at
https://mesosphere.github.io/marathon/docs/command-line-flags.html
concerning all available options. Regarding the --master flag:
--master (Required): The URL of the Mesos master. The format is a comma-delimited list of hosts like zk://host1:port,host2:port/mesos. If using ZooKeeper, pay particular attention to the leading zk:// and trailing /mesos! If not using ZooKeeper, standard URLs like http://localhost are also acceptable.
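Putting those together, a corrected invocation might look like this (a sketch; it assumes the Mesos master is registered under /mesos on the same ZooKeeper ensemble, which you should verify for your setup):
sudo ./bin/start --master zk://10.20.8.106:2181/mesos --zk zk://10.20.8.106:2181/marathon --http_port 7070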
Related
I created a Spark cluster using this repository and the related documentation.
Now I'm trying to execute a job through spark-submit inside the Docker container of the Spark master, so the command I use is something like:
/path/bin/spark-submit --class uk.ac.ncl.NGS_SparkGATK.Pipeline \
--master spark://spark-master:7077 NGS-SparkGATK.jar HelloWorld
Now the problem is that I receive Failed to connect to master spark-master:7077.
I tried every combination: container IP, container ID, container name, localhost, 0.0.0.0, 127.0.0.1, but I always receive the same error.
While if I use --master local[*] the application works.
What am I missing?
The fix was to use the hostname for spark://spark-master:7077.
So inside the Spark Master is something like this:
SPARK_MASTER_HOST=`hostname`
/path/bin/spark-submit --class uk.ac.ncl.NGS_SparkGATK.Pipeline \
--master spark://$SPARK_MASTER_HOST:7077 NGS-SparkGATK.jar HelloWorld
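One way to verify which hostname the master actually advertises (a sketch; spark-master is assumed to be the container name from the repository's setup):
docker exec spark-master hostname
If that prints something other than spark-master, use the printed name (or set SPARK_MASTER_HOST as above) when submitting.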
I have created a Hello World Spark application that works well locally in the Eclipse IDE.
I would like to deploy this application remotely from my local machine to the VirtualBox Cloudera machine through spark-submit.
The command line used for that is:
C:\Users\S-LAMARTI\Desktop\AXA\Workspaces\AXA\helloworld\target>%SPARK_HOME%/spark-submit --class com.saadlamarti.helloworld.App --master spark://192.168.56.102:7077 --deploy-mode cluster helloworld-0.0.1-SNAPSHOT.jar
Unfortunately, the application doesn't work, and I get this error message:
15/10/12 12:20:40 WARN RestSubmissionClient: Unable to connect to server spark://192.168.56.102:7077.
Warning: Master endpoint spark://192.168.56.102:7077 was not a REST server. Falling back to legacy submission gateway instead.
Does anyone have any idea why it is not working?
Remove the argument --deploy-mode cluster and try again.
Check the master web UI at master:8080; there you will see two URLs, one for client submissions and one REST URL for cluster mode.
Find your REST URL; if you set the argument --deploy-mode cluster, you must set --master to that REST URL (spark://<rest-host>:<rest-port>).
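For illustration, in a stock standalone deployment the REST endpoint defaults to port 6066 (an assumption; confirm the exact URL on master:8080), so the cluster-mode submission would become:
%SPARK_HOME%/spark-submit --class com.saadlamarti.helloworld.App --master spark://192.168.56.102:6066 --deploy-mode cluster helloworld-0.0.1-SNAPSHOT.jar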
I have a spark cluster launched using spark-ec2 script.
(EDIT: after logging into the master) I can run Spark jobs locally on the master node as:
spark-submit --class myApp --master local myApp.jar
But I can't seem to run the job in cluster mode:
../spark/bin/spark-submit --class myApp --master spark://54.111.111.111:7077 --deploy-mode cluster myApp.jar
The IP address of the master is obtained from the AWS console.
I get the following errors:
WARN RestSubmissionClient: Unable to connect to server
Warning: Master endpoint spark://54.111.111.111:7077 was not a REST server. Falling back to legacy submission gateway instead.
Error connecting to master (akka.tcp://sparkMaster@54.111.111.111:7077).
Cause was: akka.remote.InvalidAssociation: Invalid address: akka.tcp://sparkMaster@54.177.156.236:7077
No master is available, exiting.
How do I submit to an EC2 Spark cluster?
When you run with --master local you are also not connecting to the master. You are executing Spark operations in the same JVM as the application. (See docs.)
Your application code may be wrong too. So first just try to run spark-shell on the master node. /root/spark/bin/spark-shell is configured to connect to the EC2 Spark master when started without flags. If that works, you can try spark-shell --master spark://ec2-xxx-xxx-xxx-xxx.compute-1.amazonaws.com:7077 on your laptop. Be sure to use the external IP or hostname of the master machine.
If that works too, try running your application in client mode (without --deploy-mode cluster). Hopefully in the course of trying all these, you will figure out what was wrong with your original approach. Good luck!
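A sketch of that client-mode submission (the hostname is the same placeholder as above; replace it with your master's public DNS name):
../spark/bin/spark-submit --class myApp --master spark://ec2-xxx-xxx-xxx-xxx.compute-1.amazonaws.com:7077 myApp.jar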
This has nothing to do with EC2; I had a similar error on my own server. I was able to resolve it by overriding SPARK_MASTER_IP in spark-env.sh.
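For example (a sketch; conf/spark-env.sh is Spark's standard per-node configuration file, and the IP shown is the master address from the question):
# conf/spark-env.sh on the master node
export SPARK_MASTER_IP=54.111.111.111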
I'm trying to get this question solved.
To get a Mesos slave, do we have to install Mesos and then start the Mesos slave setup, or something else?
I also have a problem with the Mesos master: I ran the command
./bin/mesos-master.sh --ip=*** --work_dir=/var/lib/mesos
but it did not keep running, so I stopped it. When I ran the same command again, I got the error:
Failed to initialize, bind: Address already in use [98]
Which part did I do wrong?
You have to run mesos-master first, and then you can connect a Mesos slave running on a different node to the master. You can refer to the Getting Started guide of Mesos. Only one slave can connect to the master on the same port; if you get "bind: Address already in use", you can try running the slave on another port by passing the --port=VALUE parameter, replacing VALUE with a port number.
To start the Mesos master on localhost:
./bin/mesos-master.sh --ip=127.0.0.1 --work_dir=/var/lib/mesos
To start a slave and connect it to the master:
./bin/mesos-slave.sh --master=127.0.0.1:5050
To start and connect another slave to the same master, you have to use another port, as the default port 5051 is already used by the first connected slave. Use the --port=VALUE argument to start the slave on another port:
./bin/mesos-slave.sh --master=127.0.0.1:5050 --port=5053
You may get a permission-denied error. If so, use sudo to access the given port:
sudo ./bin/mesos-slave.sh --master=127.0.0.1:5050 --port=5053
You can run one more slave, but you have to specify an IP and a different work directory:
./mesos-slave.sh --master=<ipaddr>:<port> --ip=<ip of slave> --work_dir=<work_dir other than that of a running slave> --port=<another_port>
Edit your /etc/hosts and add more local IPs with the following entries:
127.0.0.2 slave2
127.0.0.3 slave3
Then you can replace --ip=<ip of slave> with --ip=slave2 or --ip=slave3.
You may have to replace <another_port> with a port like 5052 or 5053, or any other available port, if you already have a running slave; otherwise the slave will use the default port.
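Putting it together, a second slave on the same host might be started like this (a sketch; slave2 comes from the /etc/hosts entries above, and the work directory is an arbitrary unused path):
./bin/mesos-slave.sh --master=127.0.0.1:5050 --ip=slave2 --work_dir=/var/lib/mesos2 --port=5052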
Running only a mesos-slave on a host is simple: install the Mesos package and run only the mesos-slave process with the correct flags. It's not a problem if the master is also installed, but be careful to run only the number of masters matching the quorum.
Something is already running on the port where you are trying to start the mesos-master (which also serves its web interface).
Check what program is running on the Mesos default port, or use another port; more information about the command-line flags is available here: Mesos configuration
To see what's using port 5050 or 5051, use either one of these commands:
sudo fuser -v 5050/tcp
sudo lsof -i | grep 5050
Both commands will give you the PID of the process holding the port. Either kill that process or specify a new port for Mesos by starting it with the correct port option:
./bin/mesos-master.sh --ip=*** --work_dir=/var/lib/mesos --port=FREE_PORT
Where do you specify the ZooKeepers for the Mesos master and slaves? The following flags are required to start mesos-master (see the link I gave you):
--advertise_ip, --advertise_port, --quorum, --work_dir, --zk
What is your current full configuration for the Mesos master? You can find the related files at /etc/mesos/, /etc/mesos-master/, /etc/mesos-slave/, /etc/defaults/mesos, /etc/defaults/mesos-master, and /etc/defaults/mesos-slave. If you copy-paste the lines from them and the Mesos log here, we can give you more help.
Also, please explain the cluster you would like to set up (number of hosts, masters, and slaves), and we can help with that as well.
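For reference, a ZooKeeper-backed master start might look like this (a sketch for an assumed three-master setup; the ZooKeeper address, quorum, and IP are placeholders to adapt):
./bin/mesos-master.sh --zk=zk://<zk_host>:2181/mesos --quorum=2 --work_dir=/var/lib/mesos --ip=<master_ip>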
Execute the command below:
sudo netstat -peanut
Then check which processes are using ports 5050 and 5051.
Kill those processes using their PIDs.
Start the mesos master and slave again.
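Concretely, the sequence might look like this (a sketch; <PID> is whatever the netstat output reports for the port):
sudo netstat -peanut | grep 5050
sudo kill <PID>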
This happened to me when I accidentally killed the Mesos slave and restarted it, only for it to fail with the address-bind issue.
I downloaded a new pre-built Spark for Hadoop 2.2. Following this document, I want to launch the master on my single machine. After untarring the file, I entered sbin and ran start-master, but I hit this strange problem. Here is the log:
Spark Command: /Library/Java/JavaVirtualMachines/jdk1.7.0_55.jdk/Contents/Home/bin/java -cp :/opt/spark-0.9.0-incubating-bin-hadoop2/conf:/opt/spark-0.9.0-incubating-bin-hadoop2/assembly/target/scala-2.10/spark-assembly_2.10-0.9.0-incubating-hadoop2.2.0.jar -Dspark.akka.logLifecycleEvents=true -Djava.library.path= -Xms512m -Xmx512m org.apache.spark.deploy.master.Master --ip bogon --port 7077 --webui-port 8080
========================================
log4j:WARN No appenders could be found for logger (akka.event.slf4j.Slf4jLogger).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" org.jboss.netty.channel.ChannelException: Failed to bind to: bogon/125.211.213.133:7077
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:391)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:388)
at scala.util.Success$$anonfun$map$1.apply(Try.scala:206)
What's that "bogon"? And where does the IP 125.211.213.133 (not my IP) come from? What's the problem here?
"bogon" comes from the command line provided. You probably forgot to replace the parameter --ip to the local ip of your host.
When using the sbin/start-master.sh, if not IP is provided, the reported hostname of the machine is used:
start-master.sh
if [ "$SPARK_MASTER_IP" = "" ]; then
SPARK_MASTER_IP=`hostname`
fi
If the reported hostname is not right, you can provide Spark with its IP by setting the environment variable:
SPARK_MASTER_IP=172.17.0.1 start-master.sh
Check your hostname by running the hostname command if you are in a Linux environment. I think 125.211.213.133 is the IP that "bogon" resolves to, and you mistakenly set your hostname to "bogon".
For a quick fix, you can run hostname localhost and try again.
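To check what the current hostname resolves to before and after the change (a sketch using standard Linux tools; getent is assumed to be available):
hostname                    # prints the machine's hostname, e.g. bogon
getent hosts "$(hostname)"  # shows the IP address that name resolves to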