How to fix this Marathon authentication error - Mesos

I have set up a 3-node Mesos cluster using: How To Configure a Production-Ready Mesosphere Cluster on Ubuntu 14.04
The next step is to authenticate Marathon. For that I am using:
Framework Authentication
I just wanted to know whether I am using the correct URL, or whether I need to configure anything before using it.
I added
--acls=file:///tmp/mesos/config/acls
--credentials=file:///tmp/mesos/config/credentials
to mesos-master.service (/etc/systemd/system/multiuser.targets/mesos-master.service) and started the mesos-master service (systemctl start mesos-master).
Then I added
--mesos_authentication
--mesos_authentication_principal marathon
--mesos_authentication_secret_file /tmp/mesos/config/marathon.secret
--mesos_role marathon
to marathon.service (/etc/systemd/system/multiuser.targets/marathon.service) and started the Marathon service (systemctl start marathon).
But Marathon is not getting authenticated. Did I miss any step or configuration?
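For anyone comparing setups, here is a minimal sketch of what those two files might contain. The principal name and secret value are placeholders, and the ACLs JSON assumes the Mesos ACL format of that era:

```
# /tmp/mesos/config/credentials -- one "principal secret" pair per line
marathon marathon-secret

# /tmp/mesos/config/acls -- JSON allowing the marathon principal to
# register frameworks under the marathon role
{
  "register_frameworks": [
    {
      "principals": { "values": ["marathon"] },
      "roles": { "values": ["marathon"] }
    }
  ]
}
```

The file at /tmp/mesos/config/marathon.secret would then contain just the secret string (marathon-secret in this sketch), matching the second column of the credentials file.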

Related

Hadoop 2.6.4 Web UI Time Out

I installed Hadoop 2.6.4 on AWS with 4 instances: 1 namenode, 1 secondary namenode, and 2 slaves. After the installation completed, I tried to view the namenode web UI at the URL ec2-52-90-242-76.compute-1.amazonaws.com:50070, but the request times out. Can anybody help?
If you are accessing it from your own system, you need to update your hosts file with the IP address and hostname, or you can open it directly with IP_address:50070.
Also check the following:
Check whether the firewall is on or off (recommended: off)
Check the iptables service status (recommended: stopped)
Check SELinux (recommended: disabled)
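If you take the hosts-file route, the entry would look something like this (the IP here is a placeholder; use the instance's actual public IP):

```
# /etc/hosts on the machine you are browsing from
52.90.242.76   ec2-52-90-242-76.compute-1.amazonaws.com
```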

Passing environment variables into Mesos 0.25

I recently upgraded to Mesos mesos-0.25.0-0.2.70 on CentOS 7. To set the DOCKER_HOST environment variable for Mesos, I had previously configured it with the file "/etc/mesos-slave/executor_environment_variables", whose contents read:
{"DOCKER_HOST": "localhost:12375"}
With the upgrade of Mesos and a newer Weave version, this has stopped working. The latest version of Weave listens on a Unix socket before defaulting to a TCP socket, so I have now changed the contents of the aforementioned file to read:
{"DOCKER_HOST": "unix:///var/run/weave/weave.sock"}
Yet when I create a Docker container via Marathon it gets built in the Mesos cluster without any Weave IP or DNS. I am confused - all that needs to happen is for Mesos to pick up the environment variable DOCKER_HOST, which is not happening.
I'd be happy if anyone can throw pointers my way.
This is an old question, but in case anyone stumbles on it: I was having a similar issue where containers started by Mesos (via Marathon) were not registering with WeaveDNS. To get this to work, when starting up the Mesos agent I used the flag "--docker_socket" and set it equal to the 'DOCKER_HOST' path output when you run the command "weave env".
My containers started registering with WeaveDNS after this.
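To sketch that fix: `weave env` prints a DOCKER_HOST value, and the agent's --docker_socket flag wants the bare filesystem path, so the unix:// scheme has to be stripped. The WEAVE_ENV value below is a stand-in for the real `weave env` output:

```shell
# Stand-in for the relevant line of `weave env` output
WEAVE_ENV="DOCKER_HOST=unix:///var/run/weave/weave.sock"

# --docker_socket expects a plain path, so drop the DOCKER_HOST=unix:// prefix
DOCKER_SOCK="${WEAVE_ENV#DOCKER_HOST=unix://}"
echo "$DOCKER_SOCK"

# On the agent, this path would then be passed as:
#   mesos-slave ... --docker_socket="$DOCKER_SOCK"
```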

Marathon event subscriptions

I have been trying to enable event subscriptions. I found the Marathon REST API. I attempted to restart Marathon with "--event_subscriber http_callback" and created the "event_subscriber" and "http_endpoints" settings. When it restarts it shows "--http_endpoints http://localhost:1234/", and I am running "nc -l -p 1234" to listen on the port, but I am not getting anything when I create new apps.
It seems that I am having trouble enabling it, as I keep getting the error:
"http event callback system is not running on this Marathon instance. Please re-start this instance with \"--event_subscriber http_callback\""
Maybe I am missing something? Any help is much appreciated. Thanks.
Issue resolved! I fixed it by running the following command:
marathon --jar --master zk://your_ip:5050,your_ip:5050,your_ip:5050/mesos --event_subscriber http_callback
To get it to take effect, restart Marathon on ALL masters:
sudo service marathon restart
Once it is back up, check the page and you should be good to go.
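On the Mesosphere packages, the same flags can also be persisted so they survive restarts. My understanding (an assumption about that packaging) is that the init script reads one flag per file under /etc/marathon/conf, named after the flag without the leading dashes. A local directory is used below so the sketch runs without root; on a real master the files go in /etc/marathon/conf:

```shell
# One flag value per file; the filename is the flag name without the leading --
CONF=./marathon-conf          # stand-in for /etc/marathon/conf
mkdir -p "$CONF"
echo "http_callback" > "$CONF/event_subscriber"
echo "http://localhost:1234/" > "$CONF/http_endpoints"

# Then restart Marathon on every master:
#   sudo service marathon restart
```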

How to install Redis Sentinel as a Windows service?

I am trying to set up Redis Sentinel as a Windows service on an Azure VM (IaaS).
I am using the MS OpenTech port of Redis for Windows and running the following command...
redis-server --service-install --service-name rdsent redis.sentinel.conf --sentinel
This command installs the service on my system but when I try to start this service either through the services control panel or through the following command...
redis-server --service-run --service-name rdsent redis.sentinel.conf --sentinel
Then the service fails to start with the following error...
HandleServiceCommands: system error caught. error code=1063, message = StartServiceCtrlDispatcherA failed: unknown error
Am I missing something here?
Can someone please help me start this service and make it work properly?
I had the same problem, and mine was related to my sentinel config. A number of articles I have found have some incorrect examples, so my service install would not work until the configuration was correct. Anyway, here is what you need at a minimum for your sentinel config (for Windows Redis 2.8.17):
sentinel monitor <name of redis cache> <server IP> <port> 2
sentinel down-after-milliseconds <name of redis cache> 4000
sentinel failover-timeout <name of redis cache> 180000
sentinel parallel-syncs <name of redis cache> 1
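For a concrete (placeholder) example, here is that template filled in for a master named mymaster on the local default port, with a quorum of 2:

```
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 4000
sentinel failover-timeout mymaster 180000
sentinel parallel-syncs mymaster 1
```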
Once you have that setup, the original Redis service command above will work.
According to MSOpenTech, the following command should install Redis Sentinel as a service:
redis-server --service-install --service-name Sentinel1 sentinel.1.conf --sentinel
But when I used that command, the installed service wouldn't start: it would immediately fail with error 1067, "The process terminated unexpectedly." Looking at the service entry, I'm guessing the problem is that the --service-name parameter isn't being filtered and ends up as part of the service executable path.
What I did find to work is installing the service manually with the SC command:
SC CREATE Sentinel1 binpath= "\"C:\Program Files\Redis\redis-server.exe\" --service-run sentinel.1.conf --sentinel"
Don't forget the required space after "binpath=", and obviously that path will have to reflect where you've installed redis-server.exe. Also, after the service was installed I edited the service entry so Redis Sentinel would run under the Network Service account.
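For that account change, rather than editing the service entry by hand, the SC tool can set it as well. A sketch, assuming the service name Sentinel1 from above (an empty password is passed since the built-in account has none):

```
SC CONFIG Sentinel1 obj= "NT AUTHORITY\NetworkService" password= ""
```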
I am using v3.0.501 and ran into the two issues below. While present, they caused the service to fail on start without an error written to either the file log or the Event Log.
The configuration file must be the last parameter of the command line. If another parameter was last, such as --service-name, it would run fine when invoked from the command line but would consistently fail when started as a service.
Since the service installs as Network Service by default, ensure that the account has access to the directory where the log file will be written.
Once these two items were accounted for, Redis as a service ran smooth as silk.
Recently, I found a way to set up a Windows service for Redis and Sentinel.
During my setup, I encountered a similar problem. I finally figured it out: it was caused by the configuration file path.
I have put all my configuration into my github project: https://github.com/dingyuliang/Windows-Redis-Sentinel-Config

CDH 5.1 host IP address change

I have a CDH 5.1 cluster with 3 nodes. We installed it using the Cloudera Manager automated installation.
It was running perfectly until we moved the boxes to a different network and the IP addresses changed. I tried the following steps:
1. Stopped the cloudera-scm-server service.
2. Stopped the cloudera-scm-agent service.
3. Edited /etc/cloudera-scm-agent/config.ini.
4. Changed the server host to the new IP.
5. Restarted the cloudera-scm-agent and cloudera-scm-server services.
It is not working.
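For reference, the line being edited in steps 3 and 4 lives in the [General] section of the agent config; the IP below is a placeholder for the new Cloudera Manager server address:

```
# /etc/cloudera-scm-agent/config.ini
[General]
# Hostname or IP of the Cloudera Manager server
server_host=10.0.1.50
```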
Then I followed
http://www.cloudera.com/content/cloudera/en/documentation/cloudera-manager/v4-latest/Cloudera-Manager-Administration-Guide/cmag_change_hostnames.html
It did not help, even after changing the IPs directly in PostgreSQL.
I found the following blog:
http://www.geovanie.me/changing-ip-of-node-in-cdh-cluster/
I am getting the following error in the scm-agent log file:
ProtocolError: <ProtocolError for 127.0.0.1/RPC2: 401 Unauthorized>
It is still not working.
Can anyone please help me change all the IP addresses in a CDH 5.1 cluster safely?
Thanks,
Amit
This happens because the previous cloudera-scm-agent service wasn't stopped correctly. Please try:
$> ps -ef | grep supervisord
$> kill -9 <processID>
Then restart the agent again:
$> service cloudera-scm-agent start
