Redis cluster-ready client - Windows

I recently started learning Redis and have been able to do everything I need, for learning purposes, on 32-bit Windows. I am a .NET developer and have added Redis caching to a Web API setup using the ServiceStack client. I have also successfully run a Redis cluster of 4 masters and 4 slaves, and was wondering how I can make that work in conjunction with the ServiceStack client.
My main concern is this: if the master my client is connected to goes down and a slave takes over, how can the client automatically connect to that slave, given that it listens on a different port? So failover works at the Redis level, but how does the client handle it?
I recreated this scenario using the Redis command-line interface, but when I took the master down, the CLI simply stopped responding; every command just went into a black hole. So, in my experience, the CLI does not automatically handle failover as a client.
I have started studying StackExchange's Redis client as well, but I still have the same question.
I am using the Redis distribution provided by Microsoft for learning purposes, available on GitHub (sorry, I cannot provide a link as I am new here and do not have sufficient reputation points).

Redis Sentinel is a set of additional Redis processes that monitor the health of your Redis master/slaves and take care of performing automatic failover when they detect that your master instance is down. The Redis Config project provides a quick way to set up a popular Redis Sentinel configuration.
The ServiceStack.Redis client supports Redis Sentinel and implements the recommended client strategy, which is what enables it to recover automatically after a failover: it asks one of the Sentinels for the next available address to connect to and resumes operations with one of the available instances.
You can learn more about Redis Sentinel in the official documentation.
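The answer above refers to the C# ServiceStack.Redis client. Purely as a language-neutral illustration of the same sentinel-aware strategy (ask a Sentinel which instance is currently the master, and re-resolve that address after a failover), here is a minimal Java sketch using the Jedis client; Jedis is not the client discussed here, and the Sentinel addresses and master name "mymaster" are assumed values, not ones from the question.

import java.util.HashSet;
import java.util.Set;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisSentinelPool;

public class SentinelAwareClientSketch {
    public static void main(String[] args) {
        // Addresses of the Sentinel processes (assumed values for illustration).
        Set<String> sentinels = new HashSet<>();
        sentinels.add("127.0.0.1:26379");
        sentinels.add("127.0.0.1:26380");

        // The pool asks the Sentinels which instance is currently the master
        // for "mymaster" and re-resolves that address after a failover.
        try (JedisSentinelPool pool = new JedisSentinelPool("mymaster", sentinels);
             Jedis redis = pool.getResource()) {
            redis.set("greeting", "hello");
            System.out.println(redis.get("greeting"));
        }
    }
}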

Related

How can I run a WebSocket in Apache Flink serverless Java

I have a Java program to run in Apache Flink on AWS. I want real-time communication through a WebSocket; how can I integrate a serverless WebSocket with Apache Flink in Java?
Thank you.
Flink is designed to help you process and move data continuously between storage or streaming solutions. It is not intended to, and would not, work well with WebSockets directly, for these reasons:
When submitting a job, the runtime serializes your logic and moves it to other TaskManager instances so that it can parallelize it. These can be on another machine entirely. Now, if you were intending to serve a WebSocket with that code, it has just moved elsewhere!
TaskManagers can be stopped and restarted (scaling events, recovering from a checkpoint/savepoint, etc.). That is when your WebSocket connection would be cut.
Also, the Flink planner can decide that your source functions need to be read twice if it helps the processing. This means your WebSocket would need to maintain a history of the messages received and make sure each one is delivered once to each operator instance.
That being said, you can have a web server manage the WebSocket, piping messages back and forth to a Kafka topic, which Flink can then operate on (a minimal sketch of the Flink side follows below).
Since you're talking about AWS, I suggest you learn about their WebSocket API Gateway service. I believe it can be connected easily with Kinesis, which Flink can read from and write to easily.
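To make the Kafka suggestion concrete, here is a minimal sketch of the Flink side, assuming a recent Flink version with the flink-connector-kafka dependency on the classpath. The broker address, topic names and the uppercase mapping are placeholders, and the web server that bridges the WebSocket to Kafka is not shown.

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class WebSocketViaKafkaJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Messages that the external web server pushed from the WebSocket into Kafka.
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092")       // placeholder broker address
                .setTopics("websocket-in")                   // placeholder topic name
                .setGroupId("flink-websocket-bridge")
                .setStartingOffsets(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        DataStream<String> messages =
                env.fromSource(source, WatermarkStrategy.noWatermarks(), "websocket-in");

        // Process the messages as usual; results could be written back to another
        // topic with a KafkaSink, which the web server then forwards to the WebSocket.
        messages.map(String::toUpperCase).print();

        env.execute("WebSocket-via-Kafka sketch");
    }
}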

How can we set up the clients to continue working even if the whole Hazelcast cluster is down?

I wonder how we can set up the clients to continue working even if the remote Hazelcast cluster is down.
What you are asking for is a capability called a non-stop client, which allows clients to keep working normally and retrieve data from the Near Cache if one is enabled. This is slated to come out in the near future.
Other than that, you can set various timeouts and retry options on the clients; see below for examples:
Connection retry: https://docs.hazelcast.org/docs/4.0.1/manual/html-single/index.html#configuring-client-connection-retry
Operation retry: https://docs.hazelcast.org/docs/4.0.1/manual/html-single/index.html#enabling-redo-operation
Various disconnect-handling scenarios are captured here, which will certainly help: https://docs.hazelcast.org/docs/4.0.1/manual/html-single/index.html#enabling-redo-operation
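For example, a client configured along these lines keeps retrying the cluster connection and re-sends retryable operations. This is only a sketch against the Hazelcast 4.x Java client API; the timeout value and the map name "hot-map" are assumptions you would tune or rename.

import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.client.config.ClientConnectionStrategyConfig;
import com.hazelcast.config.NearCacheConfig;
import com.hazelcast.core.HazelcastInstance;

public class ResilientClientSketch {
    public static void main(String[] args) {
        ClientConfig config = new ClientConfig();

        // Keep trying to (re)connect to the cluster instead of failing fast;
        // ASYNC mode lets the client object outlive a temporary disconnect.
        ClientConnectionStrategyConfig strategy = config.getConnectionStrategyConfig();
        strategy.setReconnectMode(ClientConnectionStrategyConfig.ReconnectMode.ASYNC);
        strategy.getConnectionRetryConfig()
                .setClusterConnectTimeoutMillis(Long.MAX_VALUE); // retry "forever" (assumed value)

        // Retry operations that failed because a connection dropped (redo operation).
        config.getNetworkConfig().setRedoOperation(true);

        // Optional Near Cache so reads of hot entries can be served locally.
        config.addNearCacheConfig(new NearCacheConfig("hot-map"));

        HazelcastInstance client = HazelcastClient.newHazelcastClient(config);
        client.getMap("hot-map").put("key", "value");
    }
}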

Vertx clustering alternative

Does anyone with real-world experience of Vertx cluster managers other than Hazelcast have advice on our requirement below?
For our (real-time sensor data) system we have hundreds of verticles in multiple JVMs, but we do not need, or want, the eventbus to span multiple physical servers.
We're running Vertx on multiple servers, but our platform is less complex if we don't pool a single eventbus between all of them (we prefer to be explicit about passing messages between servers).
Hazelcast is the wrong cluster manager for us. We don't need its peer discovery between servers, and crucially, any release change of Hazelcast means that new clients cannot join a cluster whose existing members are running the previous version. So bringing up one new verticle compiled with Vertx 3.6.3 into an existing cluster is not possible unless we stop the entire cluster and restart it with all the verticles recompiled against 3.6.3. This seriously impacts our development. It's helpful for verticles to be more plug-and-play, and Vertx can do that, but Hazelcast can't (due to constant version incompatibilities).
Can anyone recommend a vertx cluster manager that fits our use case?
I've now had time to review each of the alternatives Vertx directly supports as a 'cluster manager' (Hazelcast, Zookeeper, Ignite, Infinispan), and we're proceeding with a Zookeeper architecture for our system, replacing Hazelcast.
Here's the background to our decision:
We started as a fairly typical (if there is such a thing) Vertx development with multiple verticles in a JVM responding to external events (urban sensor data entering our java/vertx feed handlers) published on the eventbus and the data being processed asynchronously in many other vertx verticles, often involving them publishing new derived data as new asynchronous messages.
Quite quickly we wanted to use multiple JVMs, mainly to isolate the feedhandlers from the rest of the code so that if things broke the feedhandlers would keep running (as a failsafe they persist the data as well as publishing it). So we added Vertx clustering (easily) so the JVMs on the same machine could communicate and all verticles could publish/subscribe messages in the same system. We used the default cluster manager, Hazelcast, and modified the config so the Vertx clustering is limited to the single server (we run multiple versions of the entire platform on different servers and don't want them confusing each other). We have hundreds of verticles in half a dozen JVMs.
Our environment (search SmartCambridge vertx) is fairly dynamic, with rapid development cycles (e.g. to create a new feedhandler and have it publishing its data on the eventbus), which means we commonly want to start up a JVM containing these new verticles and have it join an existing Vertx cluster, maybe permanently, maybe just for a while. Vertx/Hazelcast treats joining a (Vertx) cluster as a fairly serious operation: Hazelcast has (I believe) a concept of Hazelcast cluster members and Hazelcast clients, where clients can come and go easily but joining a Hazelcast cluster as a member requires considerable code compatibility between the existing cluster and the new member. Each time we upgraded our Vertx library the Hazelcast library version would change, and this made it impossible for a newly compiled Vertx verticle to join an existing Vertx cluster.
Note we have experimented with having the Vertx eventbus flow between multiple servers, and also extend the eventbus into the browser/javascript, but in both cases have found it simpler/more robust to be explicit about routing messages from server to server and have written verticles specifically for that purpose.
So the new plan (after several years of Vertx development), given our environment of five production/development servers but with the Vertx eventbus always limited to a single server, is to implement a single Zookeeper cluster across all five servers, so we get Zookeeper's native resilience goodness, and to configure each production server to use a different znode root (the default is 'io.vertx', but this is a simple config option).
This design has an attractively simple minimum build on a single server (i.e. Zookeeper + Vertx), so ad-hoc development on a random machine (e.g. a laptop) is still possible, but we can trivially extend the platform to have multiple servers in a single Vertx cluster by setting a common znode root (a sketch of the setup follows below).
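As a rough sketch of what this looks like in code, using the vertx-zookeeper cluster manager: the Zookeeper host addresses and the per-server root path "io.vertx.prod-a" are placeholders, and the config keys shown are the ones the vertx-zookeeper module uses as far as I know (the default rootPath being "io.vertx", as mentioned above).

import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;
import io.vertx.core.json.JsonObject;
import io.vertx.core.spi.cluster.ClusterManager;
import io.vertx.spi.cluster.zookeeper.ZookeeperClusterManager;

public class ZookeeperClusteredVertxSketch {
    public static void main(String[] args) {
        // Zookeeper ensemble spanning the servers; addresses are placeholders.
        JsonObject zkConfig = new JsonObject()
                .put("zookeeperHosts", "server1:2181,server2:2181,server3:2181")
                // A different root per platform instance keeps the eventbuses separate;
                // the default is "io.vertx". "io.vertx.prod-a" is an assumed name.
                .put("rootPath", "io.vertx.prod-a");

        ClusterManager clusterManager = new ZookeeperClusterManager(zkConfig);
        VertxOptions options = new VertxOptions().setClusterManager(clusterManager);

        Vertx.clusteredVertx(options, res -> {
            if (res.succeeded()) {
                Vertx vertx = res.result();
                // deploy verticles here, e.g. vertx.deployVerticle(new SomeFeedHandlerVerticle());
            } else {
                res.cause().printStackTrace();
            }
        });
    }
}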

What exactly is a 'node' in Redis

I'm reading around Redis at the moment and trying to get a good understanding of what a 'node' is in terms of how Redis works. Am I right to think of it in the same way as an endpoint?
In Redis' context, a node is a server running one or more redis-server processes.
An endpoint is a network address through which you can access one or more such processes, depending on how Redis is clustered.
When using the open-source Redis Cluster, an endpoint is any one of the processes - meaning a node's address and the port that the process listens on. Redis client libraries use the protocol to interrogate the clustered redis-server process about the other members of the cluster (again, processes listening on ports on nodes), so they can establish connections to the other endpoints accordingly.
Disclaimer: it appears that you're asking about AWS ElastiCache, which may or may not be using the OSS implementation in whole or partially. I do not claim to have any knowledge on that subject.
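To make the discovery behaviour described above concrete, here is a minimal Java sketch using the Jedis client (any cluster-aware client behaves similarly; the address and port are assumptions): you hand it a single known endpoint, and it asks the cluster for the other node/port combinations itself.

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

public class ClusterEndpointSketch {
    public static void main(String[] args) {
        // One known endpoint (node address + port); assumed to be a local cluster node.
        HostAndPort seed = new HostAndPort("127.0.0.1", 7000);

        // The client interrogates this process about the other cluster members and
        // opens connections to those endpoints as needed.
        try (JedisCluster cluster = new JedisCluster(seed)) {
            cluster.set("key", "value");
            System.out.println(cluster.get("key"));
        }
    }
}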
It's a piece of network-attached memory (RAM): the smallest unit in which frequently accessed data is stored, following a lazy-loading or write-through strategy. A collection of such nodes, each running a predefined Redis process, is called a cluster.
More on node :
https://redis.io/commands/cluster-nodes/

What's the difference between using the ActiveMQ MasterSlave Discovery and Shared Config?

On the ActiveMQ MasterSlave page, they introduce a few ways of setting that up, using either JDBC, a shared file, or a LevelDB store.
However, on the Network of Brokers page, they talk about MasterSlave discovery without the need to set up one of the shared configurations (JDBC, file, or LevelDB store):
<networkConnectors>
  <networkConnector uri="masterslave:(tcp://host1:61616,tcp://host2:61616,tcp://..)"/>
</networkConnectors>
What are the differences between using MasterSlave discovery and a shared configuration? When should I use one or the other?
JDBC, a shared file, or replicated LevelDB are all options for creating a highly available persistence store that can be accessed by a master and its slave(s). Note that the LevelDB store is not shared, but replicated.
If you want to connect a broker via a network connection (network of brokers) to another logical broker that consists of a master and a slave, the masterslave: URI prefix is a shorthand for the failover: prefix with less typing.
So, MasterSlave discovery and shared configuration are totally different things.
What you should compare instead is a shared persistence store (JDBC, shared file) versus a replicated LevelDB store (shared nothing). The latter allows you to set up totally independent brokers that act as a failover cluster, without the need to share a disk or database.
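For reference, on the client side the same master/slave pair is normally addressed through the failover: transport, which reconnects to whichever broker is currently accepting connections. A minimal sketch, reusing the host names from the question's snippet (the randomize=false option is an illustrative choice, not something the question requires):

import javax.jms.Connection;
import javax.jms.JMSException;

import org.apache.activemq.ActiveMQConnectionFactory;

public class FailoverClientSketch {
    public static void main(String[] args) throws JMSException {
        // failover: keeps retrying and transparently reconnects to the broker
        // that is currently the master (host1/host2 taken from the question).
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://host1:61616,tcp://host2:61616)?randomize=false");

        Connection connection = factory.createConnection();
        connection.start();
        // create sessions, producers and consumers as usual...
        connection.close();
    }
}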
One issue if you are using the masterslave discovery URI is high CPU usage (>90%). There is a workaround.
There is an interesting discussion about this on the ActiveMQ user forum: http://activemq.2283324.n4.nabble.com/Avoiding-shared-state-between-master-and-slave-brokers-td4686401.html
I am also confused about this:
Is there any way to achieve a shared-nothing, fully replicated configuration in a network of brokers, in which there is only one master at a time and all clients are connected to that one instance (with support for re-election of a new master when the current master goes away)?
