Can someone who has successfully set up a Mesos development cluster on Google Cloud help me out? I have the cluster running, but I am having a hard time creating a VPN connection to the cluster, even though I have OpenVPN installed on my machine and I have downloaded the OpenVPN file provided by Mesosphere. I am using Ubuntu 14. Basically, I have followed the instructions to create the cluster, but in order to access Mesos and Marathon I need to configure a VPN connection using the OpenVPN file provided by Mesosphere, and I do not know how to do that on Ubuntu 14.
Have you tried openvpn --config <file.ovpn>?
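On Ubuntu, the rough steps are something like the sketch below (the .ovpn filename and path are placeholders for the file Mesosphere gave you; openvpn stays in the foreground, so leave the terminal open while you use the tunnel):
sudo apt-get install openvpn
sudo openvpn --config ~/Downloads/mesosphere-cluster.ovpn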
I have installed Docker Desktop and Kubernetes on a Windows machine.
When I run the kubectl get nodes command, I get the following output:
NAME STATUS ROLES AGE VERSION
docker-desktop Ready control-plane 2d1h v1.24.0
So my cluster/control-plane is running properly.
I have a second Windows machine on the same network (in fact it's a VM), and I'm trying to add this second machine to the existing cluster.
From what I've seen, the control-plane node has to have kubeadm installed, but it seems kubeadm is only available for Linux.
Is there another tool for Windows-based clusters or is it not possible to do this?
Below are the details of Docker Desktop from the Docker documentation:
Docker Desktop includes a standalone Kubernetes server and client, as well as Docker CLI integration that runs on your machine. The Kubernetes server runs locally within your Docker instance, is not configurable, and is a single-node cluster.
You can refer to the Kubernetes documentation and create a Kubernetes cluster with all your Windows machines.
The other Windows machine can be joined into the cluster. Please refer to the Kubernetes documentation for Windows, install kubeadm, and run kubeadm join, which will bootstrap the node and join it to the Kubernetes cluster.
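For illustration, the join command generally looks like the sketch below; the address, token, and hash are placeholders that the control-plane prints when you run kubeadm init (or later via kubeadm token create --print-join-command):
kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>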
It turns out that the control-plane can only run on a Linux node.
I suspect that the output from the kubectl get nodes command was from a control-plane running in the WSL environment that Docker Desktop uses.
So the only option for running a master node on Windows is to run it in a Linux VM.
I am running a web application (copied from GitHub examples) as a container in a remote Ubuntu VM. The application is a Node.js application that uses a MySQL database. I brought the application up using docker-compose on Ubuntu.
The application came up at http://172....:3000. That IP address is displayed in the docker-compose terminal. On the Ubuntu system, when I run curl http://172....:3000, I get a proper success response. The IP address is a container network address, not the VM's IP address. There is no firewall.
How do I access the web application from my Windows 7 machine? When I try to access it using http://<VM IP address>:3000, the request never reaches the Ubuntu system, and I don't see any message in the docker-compose terminal. Can anyone help here?
ports:
- "3031:3000"
A similar line in your docker-compose file means you have published port 3000 of your container to port 3031 of your Ubuntu VM.
Now you can access your service at http://<ubuntu-ip>:3031, but before that you need to allow access to port 3031.
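For example, if the Ubuntu VM uses ufw as its firewall (an assumption; adapt this to whatever firewall or cloud security rules you actually have), allowing the published port looks roughly like:
sudo ufw allow 3031/tcp
sudo ufw status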
I'm trying to use Hue as a file browser for HDFS. For that, I have cloned the Hue repository and built the app with the following commands, as given in the README.md of the Hue repository.
git clone https://github.com/cloudera/hue.git
cd hue
make apps
build/env/bin/hue runserver
The Hue UI is accessible on the local machine at the default port using the URL http://localhost:8000, and everything works fine. But when I use my machine's IP address, http://x.x.x.x:8000, and try to access the Hue UI, it keeps processing and waiting.
Other observations:
I can ping from remote machine to the host machine.
There is no firewall blocking the ports. (checked with nmap port scanner)
Machines are in same network.
I can access other ports for Hadoop NameNodes UI and DataNodes.
Changing the http_host in hue.ini doesn't affect the result
The ideal setup for Hue is configuring a reverse proxy (Nginx or Apache HTTP, for example).
However, you should refer to the Configuration documentation to run the server externally, outside of 127.0.0.1:
[desktop]
# Webserver listens on this address and port
http_host=0.0.0.0
http_port=8888
I was able to find a solution to the issue. Hue runs on a CherryPy web server, so starting the server with build/env/bin/hue runserver starts the development server, where the hue.ini configuration is ignored.
So the correct command to start the production server, after setting up the correct configuration in the hue.ini file, is build/env/bin/hue runcpserver. Then I was able to access it from a remote host without any problem. You can also use supervisor to start the production server. More information about that can be found here.
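Roughly, the sequence is the sketch below, assuming http_host=0.0.0.0 is already set in your hue.ini (the config path and the presence of the supervisor script can vary between builds):
build/env/bin/hue runcpserver    # production (CherryPy) server that honours hue.ini
build/env/bin/supervisor         # alternatively, have Hue's supervisor manage the server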
I have a setup with Mesos and Aurora, and I have dockerized the application I need to deploy. Now I have to start the Mesos slave with Docker support, but I'm not able to. I'm trying the following:
sudo service mesos-slave --containerizers=docker,mesos start
This gives me:
mesos-slave: unrecognized service
But if I try:
sudo service mesos-slave start
the slave gets activated.
Can anyone let me know how to solve this issue?
You should also inform people about what OS you're using, otherwise it's mostly guesswork.
Normally, your /etc/mesos-slave/containerizers should contain the following to enable Docker support:
docker,mesos
Then, you'd have to restart the service:
sudo service mesos-slave restart
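For example, on a Mesosphere package install with Ubuntu/Debian-style paths (a sketch; check the paths for your distribution), the two steps would be:
echo 'docker,mesos' | sudo tee /etc/mesos-slave/containerizers
sudo service mesos-slave restart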
References:
https://open.mesosphere.com/getting-started/install/#slave-setup
https://mesosphere.github.io/marathon/docs/native-docker.html
https://open.mesosphere.com/advanced-course/deploying-a-web-app-using-docker/
I have a Linux instance on Amazon EC2. I manually installed Spark on this instance and it's working fine. Next, I wanted to set up a Spark cluster on Amazon.
I ran the following command in the ec2 folder:
spark-ec2 -k mykey -i mykey.pem -s 1 -t t2.micro launch mycluster
which successfully launched a master and a worker node. I can SSH into the master node using ssh -i mykey.pem ec2-user@master.
I've also exported the keys: AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
I have a jar file (containing a simple Spark program) that I tried to submit to the master:
spark-submit --master spark://<master-ip>:7077 --deploy-mode cluster --class com.mycompany.SimpleApp ./spark.jar
But I get the following error:
Error connecting to master (akka.tcp://sparkMaster@<master>:7077).
Cause was: akka.remote.InvalidAssociation: Invalid address: akka.tcp://sparkMaster@<master>:7077
No master is available, exiting.
I've also updated the EC2 security settings for the master to accept all inbound traffic:
Type: All traffic, Protocol: All, Port Range: All, Source: 0.0.0.0/0
A common beginner mistake is to assume that Spark communication follows a program-to-master and master-to-workers hierarchy, whereas currently it does not.
When you run spark-submit, your program attaches to a driver running locally, which communicates with the master to get an allocation of workers. The driver then communicates with the workers directly. You can see this kind of communication between the driver (not the master) and the workers in a number of diagrams in this slide presentation on Spark at Stanford.
It is important that the computer running spark-submit be able to communicate with all of the workers, not just the master. While you can start an additional EC2 instance in a security group that allows access to the master and workers, or alter the security group to include your home PC, perhaps the easiest way is to simply log on to the master and run spark-submit, pyspark, or spark-shell from the master node.
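In practice that looks something like the sketch below; the key file, user, and master address are placeholders for your own setup, and the jar is copied to the master first:
scp -i mykey.pem spark.jar ec2-user@<master-public-dns>:~/
ssh -i mykey.pem ec2-user@<master-public-dns>
spark-submit --master spark://<master-ip>:7077 --class com.mycompany.SimpleApp ~/spark.jar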