DynomiteDB and Ruby gem 'redis' - ruby

We are using the Ruby 'redis' gem to connect to Dynomite from our Ruby application. If Redis on that node is not available or gets killed, the requests are not forwarded to nodes or replicas in other racks.
Is there any configuration we have to set to forward requests to other nodes when Redis on that machine is not available?
Or is this feature simply not available in Dynomite?
Do I have to use some other gem instead of 'redis' to connect to Dynomite?
Please help.

I'm just researching this myself; I've discovered that Dynomite doesn't provide any failover or anything like that. However, their own Dyno Java client does provide this functionality.
Any client that can talk to Memcached or Redis can talk to Dynomite - no change needed. However, there will be a few things missing, including failover strategy, request throttling, connection pooling, etc., unless our Dyno client is used (more details in the Client Architecture section).
Source: http://techblog.netflix.com/2014/11/introducing-dynomite.html
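Until a Dyno-style client exists for Ruby, one workaround is to do the fallback yourself in application code: keep a list with one Dynomite node per rack and rescue connection errors from the 'redis' gem. A minimal sketch, with hypothetical host names and a placeholder port (adjust to your topology):
# Sketch only: client-side failover with the plain 'redis' gem.
# Host names and the port below are placeholders for your Dynomite topology.
require 'redis'

DYNOMITE_NODES = [
  { host: 'rack1-node1.example.com', port: 8102 },  # preferred (local) rack
  { host: 'rack2-node1.example.com', port: 8102 },  # fallback rack
]

def with_dynomite(nodes = DYNOMITE_NODES)
  last_error = nil
  nodes.each do |node|
    begin
      redis = Redis.new(host: node[:host], port: node[:port], timeout: 1)
      return yield(redis)
    rescue Redis::CannotConnectError, Redis::TimeoutError => e
      last_error = e   # this node (or its local Redis) is down; try the next rack
    end
  end
  raise last_error
end

# Usage: the read falls back to another rack if the local node's Redis is gone
value = with_dynomite { |r| r.get('some_key') }
This is only a crude client-side retry; it lacks the smarter failover, request throttling and connection pooling the Dyno client implements, so treat it as a stopgap.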

Related

Laravel server to client communication

I need to implement server-to-client communication. Long polling sounds like a non-optimal solution; sockets would be great. I'm looking at this package:
https://github.com/beyondcode/laravel-websockets
My server is running on AWS Elastic Beanstalk (with a second worker environment for queue and cron).
Does anyone have experience setting up a socket connection in Elastic Beanstalk? In particular, how can I start up a socket server using ebextensions (or any other way)? It looks like I should be using Supervisor for the server.
Should this server live in the worker environment? Can it? I don't know much about the moving parts here. Anything is helpful :)
Laravel comes with an out-of-the-box broadcasting tool: Laravel Echo.
You can either use it with a local instance (by deploying Redis, for example), or you can use an API or an external tool (Socket.IO, Pusher, ...).
Take a look at the documentation: https://laravel.com/docs/5.8/broadcasting

How to terminate inactive websocket connections in Passenger

For the past few days we have been struggling with inactive websocket connections. The problem may lie at the network level. I would like to ask if there is any switch/configuration option to set a timeout for websocket connections for Phusion Passenger in standalone mode.
You should probably solve this at the application level, because solving it in other layers will be uglier (those layers know less about the websocket).
With Passenger Standalone you could try to set max_requests. This should cause application processes to be restarted semi-regularly, and when shutting down a process Passenger should normally abort websocket connections.
If you want more control over the restart period you could also use for example a cron job that executes rolling restarts every so often, which shouldn't be noticeable to users either.
Websockets in Ruby and Passenger (and maybe Node.js as well) aren't "native" to the server. Instead, your application "hijacks" the socket and controls all the details (timeouts, parsing, etc.).
This means that a solution must be implemented in the application layer (or whatever framework you're using), since Passenger doesn't keep any information about the socket any more.
I know this isn't the answer you wanted, but it's the fact of the matter.
Some servers implement websockets natively and control the connections themselves (timeouts, parsing, etc.; e.g. the Ruby MRI iodine server), but mostly websockets are "hijacked" from the server and the application takes full control and ownership of the connections.
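If your application does own the hijacked sockets, one possible approach (purely a sketch, not tied to any particular framework) is an application-level reaper that records the last activity per socket and closes the ones that have gone quiet:
# Rough sketch of an application-level idle timeout for hijacked websockets.
# IDLE_TIMEOUT, LAST_SEEN and touch are illustrative names, not framework APIs.
IDLE_TIMEOUT = 60          # seconds of silence before a connection is dropped
LAST_SEEN    = {}          # hijacked IO object => Time of the last frame
MUTEX        = Mutex.new

# Call this from your websocket read loop whenever a frame arrives
def touch(io)
  MUTEX.synchronize { LAST_SEEN[io] = Time.now }
end

# Background reaper thread: close sockets that have been idle too long
Thread.new do
  loop do
    sleep 10
    now = Time.now
    MUTEX.synchronize do
      stale = LAST_SEEN.select { |_io, seen| now - seen > IDLE_TIMEOUT }.keys
      stale.each do |io|
        io.close rescue nil  # abort the idle websocket
        LAST_SEEN.delete(io)
      end
    end
  end
end
Many websocket libraries already expose a ping or timeout option that does the same job, so check yours before rolling your own.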

Enabling socket.io in Sails to run on multiple ports

I have to set up a Sails app where I can have socket.io connections on multiple ports - for example, authentication on port 3999 and data synchronization on port 4999.
Is there any way to do so?
I asked a similar question yesterday, and it seems that yours is similar to mine; here's what I'm going to implement.
Given that you will have multiple instances working on different ports, they won't be able to talk to each other directly, and that breaks websocket functionality.
It seems that there are multiple solutions to this (sticky sessions vs. using the pub/sub functionality of Redis); I chose Redis. There's a module for that called socket.io-redis. You also need the emitter module.
If you choose that route, it will work without a problem no matter how many servers (or how many instances per server) you run your app on, thanks to Redis.
At least that's what I know for now, been searching for a few days, haven't tried it yet.
Not to mention, you can use Nginx for load balancing, like below. (Copied from socket.io docs)
upstream io_nodes {
  ip_hash;
  server 127.0.0.1:6001;
  server 127.0.0.1:6002;
  server 127.0.0.1:6003;
  server 127.0.0.1:6004;
}

How do you use the Thrift protocol via a corporate proxy?

I've searched the internet but can't seem to find any straightforward instructions on how to use the Thrift protocol from behind a proxy.
To give you a bit of background - we have a Zipkin instance setup (https://github.com/twitter/zipkin) that uses a Cassandra instance (http://cassandra.apache.org/) to store Zipkin traces. Our intention is to negotiate over the thrift protocol to a collector that is then responsible for writing traces to Cassandra.
What conditions have to be in place for us to negotiate successfully via our corporate proxy? Do we just have to set certain proxy properties when trying to negotiate or do we have to set something else up that allows this negotiation to happen?
Any help people can give in this direction with regards to resources and/or an answer would be greatly appreciated.
The Apache Thrift TSocketTransport (almost certainly what you are using) uses TCP on a configurable port. Cassandra usually uses port 9160 for thrift. When using Thrift/TCP no HTTP setup is necessary. Just open 9160 (and any other ports your custom thrift servers may be listening on).
Though you can use Thrift over HTTP, Thrift is RPC, not REST, so proxy caching will cause problems; the client needs a direct communication channel with the server.
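If it helps to see what "just open the port" means in code, here is a rough sketch with the Apache Thrift Ruby gem (the host name is a placeholder, and the generated service client is omitted):
# Sketch: plain Thrift-over-TCP connection, no HTTP or proxy configuration.
require 'thrift'

socket    = Thrift::Socket.new('cassandra.internal.example.com', 9160)  # placeholder host
transport = Thrift::BufferedTransport.new(socket)
protocol  = Thrift::BinaryProtocol.new(transport)

transport.open
# ... pass `protocol` to your generated Thrift client and make calls here ...
transport.close
If direct TCP to that port is blocked by the proxy, that's when you need an HTTP-capable transport like the one linked below.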
If you do need to access a thrift service via a proxy, something like this would work:
https://github.com/totally/thrift_goodies/blob/master/transport.py
You can drop the Kerberos stuff if you don't need it.

Resuming persistent sessions while switching to a different mosquitto broker

Can anyone tell me how I can resume a persistent session on a different broker when switching brokers for load balancing in Mosquitto? I am really confused and can't find a way out.
Short answer: you don't.
All the persistent session information is held in the broker, and Mosquitto has no way to share that information between instances.
