I have seen a lot of examples of the Mule Object Store being used to store transactional data. However, my question is: can it be used to store something like a WebSocket connection or a Server-Sent Events (SSE) connection?
These are long-running connections, so they need to be stored somewhere for later use when another event comes in and needs to be transmitted over the same connection. What is the best practice for storing this connection information?
Obviously the connection ID can be used as the key, and a hashed form of the connection could be stored as the value? Is this feasible? Is there an example anybody can point me to?
A WebSocket connection is a live object that cannot be persisted the way a simple JSON object can. It contains references to an actual TCP socket internal to its implementation, so you need to store it in some sort of in-memory, in-process store or data structure.
Since it sounds like you already have a simple key (the connection ID) to access it by, what I would suggest is either a Map or a WeakMap object.
let socketMap = new Map();
Any time a new socket connects:
// add new connection to the map
socketMap.set(id, socket);
Any time a socket disconnects:
// remove disconnected socket from the map
socketMap.delete(id);
Any time you need to get a particular socket:
let socket = socketMap.get(id);
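If another event later arrives for a given connection, the lookup works the same way. Here is a minimal sketch, assuming the Node.js "ws" library where a stored connection object exposes send() (the same idea applies to an SSE response object):
// Minimal sketch: push a new event out over a previously stored connection.
// Assumes the Node.js "ws" library, where a connection object has send().
function pushToConnection(id, payload) {
    const socket = socketMap.get(id);
    if (!socket) {
        // connection already closed and removed from the map
        return false;
    }
    socket.send(JSON.stringify(payload));
    return true;
}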
I use ArangoDB with Go (via go-driver) and need to implement multi-tenancy, meaning every customer is going to have their data in a separate DB.
What I'm trying to figure out is how to make this multi-tenancy work. I understand that it's not sustainable to create a new DB connection for each request, which means I have to maintain a pool of connections (not a typical connection pool, though). Of course, I can't assume the pool can be limitless; there has to be a limit. However, the more I think about it, the more I realize I need some advice on it. I'm new to Go, coming from the PHP world, and obviously PHP follows a completely different paradigm.
Some details
I have an API (written in Go) which talks to ArangoDB using arangodb/go-driver. The standard way of creating a DB connection is:
create a connection
conn, err := graphHTTP.NewConnection(...)
create client
c, err := graphDriver.NewClient(...)
create DB connection
graphDB, err := p.cl.Database(...)
This works if one has only one DB and the DB connection is created at the API's boot-up.
In my case there are many DBs, and, as previously suggested, I need to maintain a pool of DB connections.
Where it gets fuzzy for me is how to maintain this pool, keeping in mind that the pool has to have a limit.
Say my pool is of size 5, and over time it has filled up with connections. A new request comes in, and it needs a connection to a DB which is not in the pool.
The way I see it, I have only 2 options:
Kill one of the pooled connections, if it's not in use.
Wait until #1 can be done, or throw an error if the waiting time is too long.
The biggest unknown for me, mainly because I've never done anything like this, is how to track whether a connection is being used or not.
What makes things even more complex is that the DB connection has its own pool, done at the transport level.
Any recommendations on how to approach this task?
I implemented this in a Java proof-of-concept SaaS application a few months ago.
My approach can be described at a high level as:
Create a concurrent queue to hold the Java driver instances (the Java driver has connection pooling built in).
Use the subdomain to determine which SaaS client is being used (a URL parameter could be used instead, but I don't like that approach).
Reference the correct connection from the queue based on the SaaS client, or create a new one if it's not in the queue.
Continue with the request.
This was fairly trivial because each DB was named to match the subdomain, but a lookup from the _system database could also be used.
*Edit
The concurrent queue holds at most one driver object per database, so its size will at most match the number of databases. In my testing I did not manage the size of this queue at all.
A good server should be able to hold hundreds or even thousands of these, depending on memory, and a load-balancing strategy can be used to split clients into different server clusters when scaling large enough. A worker thread could also be used to evict objects based on age, but that might interfere with throughput.
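For the Go side of the question, a rough sketch of the same lookup-or-create idea with arangodb/go-driver might look like the following. It assumes all tenant databases live on the same ArangoDB deployment, that each database is named after the tenant's subdomain, and that one shared client is enough because the driver already pools HTTP connections internally; the endpoint, credentials, and type names are placeholders, not part of the original answer.
// tenantdb: a minimal, mutex-guarded cache of per-tenant database handles.
package tenantdb

import (
    "context"
    "sync"

    driver "github.com/arangodb/go-driver"
    "github.com/arangodb/go-driver/http"
)

type Pool struct {
    mu      sync.Mutex
    client  driver.Client
    tenants map[string]driver.Database
}

// NewPool sets up one shared client; the driver pools HTTP connections itself.
func NewPool(endpoint, user, pass string) (*Pool, error) {
    conn, err := http.NewConnection(http.ConnectionConfig{
        Endpoints: []string{endpoint},
    })
    if err != nil {
        return nil, err
    }
    c, err := driver.NewClient(driver.ClientConfig{
        Connection:     conn,
        Authentication: driver.BasicAuthentication(user, pass),
    })
    if err != nil {
        return nil, err
    }
    return &Pool{client: c, tenants: make(map[string]driver.Database)}, nil
}

// ForTenant returns the cached handle for a tenant (e.g. the subdomain),
// opening it on first use and reusing it afterwards.
func (p *Pool) ForTenant(ctx context.Context, tenant string) (driver.Database, error) {
    p.mu.Lock()
    defer p.mu.Unlock()
    if db, ok := p.tenants[tenant]; ok {
        return db, nil
    }
    db, err := p.client.Database(ctx, tenant) // database named after the subdomain
    if err != nil {
        return nil, err
    }
    p.tenants[tenant] = db
    return db, nil
}
A request handler would then call ForTenant(ctx, subdomain) and use the returned handle for the rest of that request; because these handles are lightweight, the map should not need the age-based eviction described above unless the number of tenants grows very large.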
I'm learning how to use Socket.IO, and I'm building a small game.
When someone creates a room, I save the room values in an array.
var clients = [], rooms = [];
...
rooms.push(JSON.parse(roomData));
But if the server crashes, it loses all the room data.
Is it a good idea to save the data into a database and repopulate the array with these values when a user connects to the server?
Thank you.
Restoring socket.io connection state after a server crash is a complicated topic that depends a lot on exactly what you're doing and what the state is. Sometimes the client can hold most of the state, sometimes it must be persisted on the server.
State can be stored to disk, in another in-memory process like Redis, or in the client and presented when it reconnects.
You just have to devise a sequence of events on your server for how everything gets restored when a client reconnects. You will also likely need persistent client IDs so you know which client is which when they reconnect.
There are many different ways to do it. So yes, you could use a DB, or you could do it a different way. There is no single "best" way because it depends upon your particular circumstances and the tools you are already using.
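As one deliberately simple illustration of the disk option, the sketch below writes the rooms array to a JSON file whenever it changes and reloads it at startup. It assumes io is the already-created Socket.IO server instance; the file name, the "createRoom" event, and the shape of roomData are assumptions, and a real app might well use Redis or a database instead.
// Minimal sketch: persist rooms to disk and repopulate them on startup.
const fs = require('fs');
const ROOMS_FILE = './rooms.json';

let rooms = [];

// On boot, restore whatever survived the last crash or restart.
if (fs.existsSync(ROOMS_FILE)) {
    rooms = JSON.parse(fs.readFileSync(ROOMS_FILE, 'utf8'));
}

function saveRooms() {
    fs.writeFileSync(ROOMS_FILE, JSON.stringify(rooms));
}

io.on('connection', (socket) => {
    socket.on('createRoom', (roomData) => {
        rooms.push(JSON.parse(roomData));
        saveRooms(); // persist on every change so a crash loses nothing
    });
});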
I'm trying to implement the QUIC protocol in the Linux kernel. QUIC works on top of UDP to provide connection-oriented, reliable data transfer.
QUIC was designed to reduce the number of handshakes required between sessions as compared to TCP.
Now, I need to store some data from my current QUIC session so that I can use it after the session ends, and later use it to initiate a new session. I'm at a loss as to where this data should be stored so that it's not deleted between sessions.
EDIT 1: The data needs to be stored for as long as the socket lives in memory. Once the socket has been destroyed, I don't need the data anymore.
As an aside, how could I store data even across different sockets? I just need a general answer to this, as I don't need it for now.
Thank you.
Suppose a TCP proxy has forwarded a request to the backend server. When it receives the reply from the backend server, how does it know which client to reply to? What exact session information does a proxy store?
Can anyone please shed some light on this?
It depends on the protocol, it depends on the proxy, and it depends on whether transparency is a goal. Addressing all of these points exhaustively would take forever, so let's consider a simplistic case.
A network connection in software is usually represented by some sort of handle (whether that's a file descriptor or some other resource). In a C program on a POSIX system, we could simply keep two file descriptors associated with each other:
struct proxy_session {
    int client_fd;
    int server_fd;
};
This is the bare-minimum requirement.
When a client connects, we allocate one of these structures. There may be a protocol that lets us know what backend we should use, or we may be doing load balancing and picking backends ourselves.
Once we've picked a backend (either by virtue of having parsed the protocol or through having made some form of routing decision), we initiate a connection to it. Simplistically, a proxy (as an intermediary) simply forwards packets between a client and a server.
We can use any number of interfaces for tying these two things together. On Linux, for example, epoll(2) allows us to associate a pointer with events on a file descriptor. We can provide it a pointer to our proxy_session structure for both the client and server side. When data comes in on either of those file descriptors, we know where to map it.
Lacking such an interface, we necessarily have a means of differentiating connection handles (whether they're file descriptors, pointers, or some other representation). We could then use a structure like a hash table to look up the destination for a handle. The solution comes down to being able to differentiate connections from each other and holding some state that "glues" two connections together.
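To make the epoll approach concrete, here is a minimal sketch (not a complete proxy: no accept/connect, no real error handling, no teardown). It registers one small per-side record for each file descriptor so an event tells the loop both the session and which peer the data should go to; the proxy_side type and the function names are illustrative, not taken from any particular proxy.
#include <sys/epoll.h>
#include <unistd.h>

struct proxy_session {
    int client_fd;
    int server_fd;
};

/* One registration per file descriptor, so an event tells us both the
 * session it belongs to and which peer the data should be forwarded to. */
struct proxy_side {
    struct proxy_session *session;
    int fd;       /* the descriptor this registration covers      */
    int peer_fd;  /* the descriptor on the other side of the pair */
};

int watch_side(int epfd, struct proxy_side *side)
{
    struct epoll_event ev = {0};

    ev.events = EPOLLIN;
    ev.data.ptr = side;  /* epoll hands this pointer back with each event */
    return epoll_ctl(epfd, EPOLL_CTL_ADD, side->fd, &ev);
}

void event_loop(int epfd)
{
    struct epoll_event events[64];
    char buf[4096];

    for (;;) {
        int n = epoll_wait(epfd, events, 64, -1);

        for (int i = 0; i < n; i++) {
            struct proxy_side *side = events[i].data.ptr;
            ssize_t len = read(side->fd, buf, sizeof buf);

            if (len <= 0)
                continue;  /* a real proxy would tear down both sides here */

            /* Forward to the other end of the session; a real proxy would
             * also handle short writes and EAGAIN. */
            if (write(side->peer_fd, buf, (size_t)len) < 0)
                continue;
        }
    }
}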
I am experimenting with ZeroMQ, where I want to create a server that does:
REQ-PIPELINE-REPLY
I want to sequentially receive data query requests, push each one through an inproc pipeline to parallelise the data query, and have a sink merge the data back together. After the sink merges the data, it sends the merged result back as the reply to the request.
Is this possible? How would it look? I am not sure whether push/pull will preserve the client's address for the REP socket to send back to.
Assume that each client has only a single request outstanding at any one time.
Is this possible?
Yes, but with different socket types.
How would it look?
(in C)
What you may like to do is shift the external server socket from a ZMQ_REP socket to a ZMQ_ROUTER socket. ROUTER/DEALER sockets carry identities, which allow you to have multiple requests in your pipeline and still respond correctly to each.
The Asynchronous Client/Server Pattern:
http://zguide.zeromq.org/php:chapter3#The-Asynchronous-Client-Server-Pattern
The only hitch is that you will need to manage the multiple parts of the ZMQ message. The first part is the identity, the second is an empty delimiter frame, and the third is the data. As long as you reply with the frames in the same order as the request, the identity will guide your response's data back to the correct client. I wrapped my requests in a struct:
struct msg {
    zmq_msg_t *identity;
    zmq_msg_t *nullMsg;
    zmq_msg_t *data;
};
Make sure to use zmq_msg_more when receiving messages, and to set the more flag (ZMQ_SNDMORE) correctly when sending.
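Putting that together, a stripped-down sketch of handling a single request on the ROUTER socket might look like this; it uses only the plain libzmq C API, error handling is omitted, and the pipeline/sink step is left as a comment:
#include <zmq.h>

/* Minimal sketch: receive one REQ-originated message on a ROUTER socket and
 * echo it back, preserving the identity frame so ROUTER can route the reply. */
void handle_one_request(void *router)
{
    zmq_msg_t identity, delimiter, data;

    zmq_msg_init(&identity);
    zmq_msg_init(&delimiter);
    zmq_msg_init(&data);

    /* ROUTER prepends the client identity; REQ adds an empty delimiter frame. */
    zmq_msg_recv(&identity, router, 0);   /* part 1: identity        */
    zmq_msg_recv(&delimiter, router, 0);  /* part 2: empty delimiter */
    zmq_msg_recv(&data, router, 0);       /* part 3: payload         */
    /* zmq_msg_more(&data) reports whether further parts follow.     */

    /* ... push the payload through the inproc pipeline and collect the
     * merged result at the sink here ... */

    /* Reply with frames in the same order so the data reaches the right client. */
    zmq_msg_send(&identity, router, ZMQ_SNDMORE);
    zmq_msg_send(&delimiter, router, ZMQ_SNDMORE);
    zmq_msg_send(&data, router, 0);       /* replace with the merged reply */
}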
I am not sure whether push/pull will preserve the client's address for the REP socket to send back to.
You are correct. A push/pull pattern would not allow for specifying the return address among multiple clients.