I have successfully created and run tasks on Mesos using Marathon. However, Marathon is supposed to support HTTP callbacks when you start it using
--event_subscriber http_callback --http_endpoints http://myip:3000/endpoints
However, this does not seem to actually send any callbacks to my service. Is there anything else that needs to be set up in order for the callbacks to work?
My issue stemmed from the fact that I had multiple instances of Marathon running. The first Marathon instance, which was considered the master, was not configured to use callbacks. The second instance, which was considered the slave, was configured to use callbacks.
As the documentation states, all requests to a slave will be forwarded to the master.
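For reference, once the subscriber flags are set on the master, Marathon POSTs each event (status_update_event and friends) as a JSON document to the registered endpoint. A minimal sketch of a receiver, using Java's built-in HttpServer and assuming the port and path from the command line above:

import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class MarathonCallbackReceiver {
    public static void main(String[] args) throws Exception {
        // Listen on the port and path passed to --http_endpoints,
        // e.g. http://myip:3000/endpoints
        HttpServer server = HttpServer.create(new InetSocketAddress(3000), 0);
        server.createContext("/endpoints", exchange -> {
            // Marathon POSTs one JSON document per event
            try (InputStream body = exchange.getRequestBody()) {
                String event = new String(body.readAllBytes(), StandardCharsets.UTF_8);
                System.out.println("Received event: " + event);
            }
            exchange.sendResponseHeaders(200, -1); // empty 200 acknowledges receipt
            exchange.close();
        });
        server.start();
    }
}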
Related
My question is a combination of this and this question on Stack Overflow; however, the answers there don't help me. I want to know how, when a task belonging to a framework finishes in a Mesos cluster, the framework's scheduler is informed of this. The more details you can provide (who initiates the communication, is there a lag, what information is included in the message, etc.), the better. I was not able to find the answer even in the Mesos docs.
Frameworks are notified about tasks with the Update event:
Update Sent by the master whenever there is a status update that is generated by the executor, agent or master. Status updates should be used by executors to reliably communicate the status of the tasks that they manage. It is crucial that a terminal update (e.g., TASK_FINISHED, TASK_KILLED, TASK_FAILED) is sent by the executor as soon as the task terminates, in order for Mesos to release the resources allocated to the task. It is also the responsibility of the scheduler to explicitly acknowledge the receipt of status updates that are reliably retried. See ACKNOWLEDGE in the Calls section above for the semantics. Note that uuid and data are raw bytes encoded in Base64.
All communication (in the V1 API) is initiated by the framework. The framework calls the SUBSCRIBE method and keeps the connection open to receive updates. Basically, when a task is done, communication looks like this: Task → Executor → Agent → Master → Framework
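To make that concrete, here is a sketch of the scheduler side using the older V0 Java binding, where the same update arrives as a statusUpdate callback (in the V1 HTTP API it arrives as an UPDATE event on the subscribed connection); only the relevant callback is shown:

import org.apache.mesos.Protos.TaskState;
import org.apache.mesos.Protos.TaskStatus;
import org.apache.mesos.Scheduler;
import org.apache.mesos.SchedulerDriver;

public abstract class StatusUpdateScheduler implements Scheduler {
    @Override
    public void statusUpdate(SchedulerDriver driver, TaskStatus status) {
        // The executor generated this status; it travelled
        // Executor -> Agent -> Master -> here.
        System.out.printf("Task %s is now %s%n",
                status.getTaskId().getValue(), status.getState());

        if (status.getState() == TaskState.TASK_FINISHED) {
            // Terminal update: Mesos can now release the task's resources.
        }

        // With explicit acknowledgements enabled, the scheduler must ack the
        // update so the agent stops retrying it.
        driver.acknowledgeStatusUpdate(status);
    }
}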
I've been configuring HTTP health checks for all my apps in Marathon, which are working nicely. The trouble is Marathon will keep stepping in and restarting a container that is failing its health check, and I won't know unless I happen to be looking at the Marathon UI.
Is there a way to retrieve all apps that have a failing health check, so I can send an email alert or similar?
Marathon exposes information about failing health checks on its event bus, so you can write a simple service that consumes Marathon's health check events ("eventType": "instance_health_changed_event") and translates them into a metric, an alert, you name it.
For a reference I can recommend allegro/appcop, a service that scales down unhealthy applications. Its code could easily be altered to do what you want.
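If you would rather not run a separate service, you can also tail Marathon's event stream yourself. A minimal sketch in Java, assuming your Marathon version exposes the /v2/events server-sent-events endpoint (the host and port are placeholders):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HealthEventWatcher {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://marathon.example.com:8080/v2/events"))
                .header("Accept", "text/event-stream")
                .build();
        // Stream the events line by line and keep only health transitions.
        client.send(request, HttpResponse.BodyHandlers.ofLines())
              .body()
              .filter(line -> line.contains("instance_health_changed_event"))
              .forEach(line -> {
                  // Parse the JSON payload here and raise an alert,
                  // push a metric, send an email, etc.
                  System.out.println("Health change: " + line);
              });
    }
}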
I'm developing a publish subscribe processor with Eclipse Milo for Apache NiFi.
I have a service that handles most of the interaction with Eclipse Milo and the server and a controller that essentially just calls the service's functions.
The subscribing to nodes on the OPCUA server works fine, but I can't think of a good way to terminate the subscription, e.g. when I stop the processor.
The subscription, which "lives" in the service, survives the service getting disabled, as well as the controller being disabled/stopped. That means the @OnStopped and @OnUnscheduled methods I defined never get called, likely because the subscription never gets terminated. So I can't use these two methods.
I know that I can terminate threads in NiFi 1.7+, but I don't think that's a good way to handle this and also I'm still using 1.2.
Does anyone have any suggestions?
Update to the latest version; some problems with the way processors finish have been fixed.
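Regardless of version, it is cleaner to tear the subscription down explicitly when the processor stops, so it cannot outlive the component that created it. A sketch, assuming the service exposes the Milo client and the subscription id it created (the field and class names here are hypothetical):

import java.util.concurrent.TimeUnit;

import org.apache.nifi.annotation.lifecycle.OnStopped;
import org.eclipse.milo.opcua.sdk.client.OpcUaClient;
import org.eclipse.milo.opcua.stack.core.types.builtin.unsigned.UInteger;

public class SubscriptionLifecycle {
    private volatile OpcUaClient client;       // set when the processor starts
    private volatile UInteger subscriptionId;  // returned when subscribing

    @OnStopped
    public void tearDownSubscription() throws Exception {
        if (client != null && subscriptionId != null) {
            // Ask the server to delete the subscription so it does not
            // outlive the processor; block briefly for the round trip.
            client.getSubscriptionManager()
                  .deleteSubscription(subscriptionId)
                  .get(5, TimeUnit.SECONDS);
            subscriptionId = null;
        }
    }
}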
Recently I started learning Redis and have been able to do everything, from a learning standpoint, on 32-bit Windows. I am a .NET developer and made caching available with Redis using the ServiceStack client in a Web API setup. I have been able to successfully run a Redis cluster of 4 masters and 4 slaves, and was wondering how I can make that work in conjunction with the ServiceStack client.
My main concern is: if the master that I connect my client to goes down, how can the client automatically connect to some other available slave that takes over, given that the port of that slave is going to be different? So failover works at the Redis level, but how does the client handle it?
I recreated the scenario mentioned above using the Redis command-line interface, but when I took the master down, the interface just stopped responding, as if everything was going into a black hole. So, in my experience, the CLI does not automatically handle failover as a client.
I have started studying the StackExchange.Redis client, but I still have the same question.
I am using the Redis distribution provided by Microsoft for learning purposes, available on GitHub (sorry, I cannot provide a link as I am new here and do not have sufficient reputation points).
Redis Sentinels are additional Redis processes which monitor the health of your Redis masters/slaves and take care of performing automatic failover when they detect that your master instance is down. The Redis Config project provides a quick way to set up a popular Redis Sentinel configuration.
The ServiceStack.Redis client supports Redis Sentinel and implements the recommended client strategy, which is what enables it to automatically recover after a failover by asking one of the Sentinels for the next available address to connect to, then resuming operations with one of the available instances.
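ServiceStack.Redis is a .NET client; for illustration in Java, the same Sentinel-based discovery pattern looks like this with Jedis (the sentinel addresses and the master name "mymaster" are assumptions):

import java.util.HashSet;
import java.util.Set;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisSentinelPool;

public class SentinelAwareClient {
    public static void main(String[] args) {
        Set<String> sentinels = new HashSet<>();
        sentinels.add("127.0.0.1:26379");
        sentinels.add("127.0.0.1:26380");

        // The pool asks a Sentinel for the current master's address and
        // re-resolves it after a failover, so the application never
        // hardcodes the master's port.
        try (JedisSentinelPool pool = new JedisSentinelPool("mymaster", sentinels);
             Jedis jedis = pool.getResource()) {
            jedis.set("greeting", "hello");
            System.out.println(jedis.get("greeting"));
        }
    }
}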
You can learn more about Redis Sentinel in the official Documentation.
Does anybody have some info, links, or pointers on how cross-process EventBus communication occurs? From the documentation I conclude that multiple Vert.x instances (and thus separate JVM processes) can be clustered and communicate via the EventBus. However, there is little to no documentation on how to achieve this.
Looking into the docs, I can see that the publish/registerHandler methods take an address as a String, which works within a process, but I cannot wrap my head around how it works across processes, or how to register and publish to an address. Does it work over HTTP or TCP? From an API perspective, do I need to pass a port and a process signature?
Cross-process communication happens via the EventBus. Multiple Vert.x instances can be started up and clustered to allow separate instances on the same or other machines to communicate. The low-level clustering is handled by Hazelcast. The configuration is handled by the cluster.xml file in the conf folder of your Vert.x install. You can learn more about the format of the file by looking at the Hazelcast docs. It is transparent to your handlers and works over TCP.
You can test it by running two or more instances on your local machine, once they are started with the -cluster flag. Look at the example being run, and the config changes required, in How to use eventbus messaging in vertx?
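To make the flow concrete, here is a minimal sketch using the Vert.x 3 programmatic API (the address "news.feed" is illustrative). Each JVM runs the same program; Hazelcast discovers the peers, and handlers only ever see the address string, with no host, port, or process identity involved:

import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;

public class ClusteredEventBusExample {
    public static void main(String[] args) {
        Vertx.clusteredVertx(new VertxOptions(), res -> {
            if (res.succeeded()) {
                Vertx vertx = res.result();
                // Register a handler; it works the same whether the publisher
                // is in this process or in another clustered JVM.
                vertx.eventBus().consumer("news.feed", message ->
                        System.out.println("Got: " + message.body()));
                // Publish to every registered consumer in the cluster.
                vertx.eventBus().publish("news.feed", "hello cluster");
            } else {
                res.cause().printStackTrace();
            }
        });
    }
}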