I use https://studio.apollographql.com/sandbox for all my local GraphQL development.
I can connect to one endpoint and save operations for that connection.
Is there any way I can connect to multiple endpoints and save operations for each?
That way, the next time I log in to the Studio I can see all the connections and their saved operations.
If that's not possible, could someone suggest an app for Mac that does the job? Postman isn't convenient enough for GraphQL.
I'm building a web app that requires communication between clients, for which I'm using socket.io. Some of the data, however, also has to be written to the database regularly.
Some of it changes rarely (preferences, on button click); other data changes every second, for example a timer value. The timer value cannot simply be computed on demand because the timer can be paused.
Right now, whenever a client emits an event, it also makes a request to the backend to update the database. I was wondering: would it be a good idea to have the socket.io server update the database instead, so the clients would only have to take care of the socket communication? It seems to me that having the browser make a separate request to the backend is a bit resource-heavy and takes away some of the advantages of socket-based communication.
Edit: the back end of the app and the socket server are two different servers, but they run on the same machine, so communication between them should be fast.
The main point of using socket.io is that it lets you push data to clients, so they don't need to poll your server constantly for the latest changes, while providing a low-overhead communication channel between the server and the client.
On a user click in your application you can call an API, emit data, or do both, among other things.
It is a good idea to have the socket.io server update the database; you can also authorize each socket, store information about each client's sockets, and so on.
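A minimal sketch of that idea, assuming a hypothetical timer event and a stand-in persistence helper:

// socket.io server that both relays events and persists them
const { Server } = require("socket.io");
const io = new Server(3000);

// stand-in for your real persistence layer
const db = {
  saveTimerValue: async (value) => { /* write to your database here */ },
};

io.on("connection", (socket) => {
  // you could authorize the socket here before accepting events
  socket.on("timer:update", async (value) => {
    socket.broadcast.emit("timer:update", value); // relay to the other clients
    await db.saveTimerValue(value);               // persist without a second HTTP request
  });
});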
I have to set up a Sails app where I can have socket.io connections on multiple ports - for example, authentication on port 3999 and data synchronization on port 4999.
Any way to do so?
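For what it's worth, outside of Sails' own socket layer, plain socket.io can listen on two ports simply by running two independent server instances; a sketch (ports taken from the question):

// two independent socket.io servers, one per concern
const { Server } = require("socket.io");

const authIo = new Server(3999); // authentication traffic
const dataIo = new Server(4999); // data synchronization traffic

authIo.on("connection", (socket) => {
  socket.on("login", (credentials) => { /* authenticate here */ });
});

dataIo.on("connection", (socket) => {
  socket.on("sync", (payload) => { /* synchronize data here */ });
});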
I asked a similar question yesterday, and yours seems similar to mine; here's what I'm going to implement.
Given that you will have multiple instances working on different ports, they won't be able to talk to each other directly, and that breaks WebSocket functionality such as broadcasting across instances.
There seem to be multiple solutions to this (sticky sessions vs. using the pub/sub functionality of Redis); I chose Redis. There's a module for that called socket.io-redis. You also need its companion emitter module, socket.io-emitter.
If you choose that route, then no matter whether you run your app as many instances on a single server or across multiple servers each with multiple instances, it will function without a problem thanks to Redis.
At least that's what I know for now; I've been searching for a few days but haven't tried it yet.
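A sketch of wiring the adapter in, assuming Redis on localhost:6379 (this is the socket.io-redis API from the socket.io 2.x era; later releases renamed the package @socket.io/redis-adapter):

// run one copy of this per instance, each on its own port
const PORT = Number(process.env.PORT) || 6001;
const io = require("socket.io")(PORT);
const redisAdapter = require("socket.io-redis");

io.adapter(redisAdapter({ host: "localhost", port: 6379 }));

io.on("connection", (socket) => {
  socket.on("sync", (data) => {
    io.emit("sync", data); // reaches clients on every instance via Redis
  });
});

With the adapter in place, io.emit on any instance is delivered cluster-wide, which is what restores broadcast semantics across ports and machines.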
Not to mention, you can use Nginx for load balancing, like below (copied from the socket.io docs). The ip_hash directive gives you sticky sessions, so a given client always reaches the same instance, which socket.io's HTTP long-polling fallback requires.
upstream io_nodes {
  ip_hash;
  server 127.0.0.1:6001;
  server 127.0.0.1:6002;
  server 127.0.0.1:6003;
  server 127.0.0.1:6004;
}
How is it possible to ensure that Autobahn only creates a single connection?
Is it possible to either check for existing connections before calling connection.open, or perhaps kill all other connections on connection.onopen?
When using AutobahnJS (which I assume this relates to), you will generally not need to open multiple connections. If your application connects to a single WAMP server, then you can use the single connection for all WAMP actions while it persists, i.e. for all your subscriptions, publishes, registrations and calls.
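A sketch of the usual pattern: create the connection once in a shared module and do everything inside onopen, instead of calling connection.open from several places (the URL, realm, and topic names are placeholders):

// connection.js - created once, shared by the whole app
var autobahn = require("autobahn");

var connection = new autobahn.Connection({
  url: "ws://localhost:8080/ws", // placeholder router URL
  realm: "realm1",               // placeholder realm
});

connection.onopen = function (session) {
  // the one session serves all subscribes, publishes, registers, and calls
  session.subscribe("com.example.topic", function (args) {
    console.log("event:", args[0]);
  });
  session.publish("com.example.heartbeat", ["up"]);
};

connection.open(); // called exactly once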
When designing a client/server architecture, is there any advantage to multiplexing multiple WebSocket streams from the same process over a single shared connection to the server, versus opening one WebSocket connection per thread/session in the client (as is typically done when connecting to memcached or database servers)?
I'm aware of the overhead associated with each connection (e.g. RAM ...). But I expect at most 1K-10K connections on each client side.
Specific use case:
Let's assume I have a remote server with multiple sessions on one side, and on the other side multiple clients, each of which connects to a different session through the WebSocket server.
On the remote server, there are two ways to implement it: (1) each session creates its own WebSocket connection, or (2) all sessions share the same WebSocket connection.
From a connection standpoint, I like the sharing solution (one WebSocket connection for all sessions), because the WebSocket server is limited in the number of connections it can hold (saving servers/scaling).
But from a traffic/speed/performance standpoint, if sessions send lots of small packets, a single shared connection makes it hard to utilize the bandwidth well (e.g. by collecting a few small packets into one, or splitting a big packet into small ones), because we may have to send different packets to different clients from different sessions; packets with different destinations and sources cannot simply be batched together. We could create a "virtual connection" layer that manages each session's data to maximize throughput (a sketch of that idea follows after this question), but that would add a lot of implementation complexity.
Any other opinions?
Thanks,
JB.
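For reference, the "virtual connection" idea mentioned above usually amounts to tagging every message with a session id and demultiplexing on arrival; a minimal sketch using the ws package (all names are illustrative):

const WebSocket = require("ws");

const ws = new WebSocket("ws://server.example:8080"); // placeholder server
const sessions = new Map(); // sessionId -> per-session message handler

function send(sessionId, payload) {
  ws.send(JSON.stringify({ sessionId, payload }));
}

ws.on("message", (raw) => {
  const { sessionId, payload } = JSON.parse(raw.toString());
  const handler = sessions.get(sessionId);
  if (handler) handler(payload); // route to the right session
});

// two "virtual connections" sharing the one socket
sessions.set("auth", (msg) => console.log("auth:", msg));
sessions.set("sync", (msg) => console.log("sync:", msg));
ws.on("open", () => send("auth", { token: "..." }));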
I think you should consider using a limited connection pool, as is done in database connection architectures.
Another solution I would consider is a pub/sub middleman such as Redis. This lets you lean on existing tooling and makes scaling easier.
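For the Redis route, a sketch using the node redis client (v4 API; channel names are illustrative): each WebSocket server subscribes to the sessions it hosts, and any server can publish without knowing where a session lives.

const { createClient } = require("redis");

async function main() {
  const pub = createClient();  // defaults to localhost:6379
  const sub = pub.duplicate(); // a subscribing connection must be dedicated
  await pub.connect();
  await sub.connect();

  await sub.subscribe("session:42", (message) => {
    // forward to whichever websocket(s) this server holds for session 42
    console.log("session:42 got", message);
  });

  await pub.publish("session:42", JSON.stringify({ kind: "timer", value: 17 }));
}

main().catch(console.error);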
To the best of my understanding, both having a single connection and using a multitude of connections have their issues.
For example, one connection can send only one message at a time. A big enough message could block the connection... are you moving big data?
Many connections can cause an overhead that could be very expensive as well as introduce more chances for errors. Consider the following:
Creating new connections is expensive: it uses bandwidth, suffers longer network delays, and ties up local resources, which is exactly the cost WebSockets let us avoid.
You will run into scalability issues. For instance, Heroku limits WebSocket connections to 600 per server, or at least they did so a short while back (and I think that's reasonable)... How will you connect all the servers together to one data store?
Remember that every OS has an open-file limit and that each WebSocket consumes a file descriptor, so WebSockets are a limited resource.
Regarding traffic/data speed/performance, it is a question of server architecture... but I believe you will actually see a slight speed increase by using one connection (or a small pool of connections). It's important to remember that there is no effective multitasking when sending TCP/IP packets; they go out over the wire one after another anyway.
Also, with a limited number of connections (even a single one), you benefit from the OS's packet-joining behavior (Nagle's algorithm), which lets several WebSocket frames travel in one TCP/IP packet (unless you constantly flush the socket). With more connections you actually waste more bandwidth, even disregarding the bandwidth used to open each new connection.
Just my 5 cents, we will all think differently, I'm sure.
Good Luck!
I am working on designing a large-scale web application that will make heavy use of WebSockets to communicate data between the client browser and our servers. The question I have is how we are going to push application updates to our servers without our clients noticing. We want as close to 100% uptime as possible.
The problem is that WebSockets get closed if an application update is being deployed on the server they're attached to. Is there a way to hand off an existing connection to another application server?
I was considering implementing some client-side script that would automatically reconnect to the server if the connection closed prematurely. However, I would like to hear from you smart people what other ideas I should consider.
Thanks!
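For the reconnect idea, a browser-side sketch with exponential backoff (the endpoint is a placeholder); combined with draining connections off the old server during a rolling deploy, this is the common approach:

function connect(url, attempt = 0) {
  const ws = new WebSocket(url);

  ws.onopen = () => {
    attempt = 0; // reset the backoff once we're connected again
  };

  ws.onclose = () => {
    // retry after 1s, 2s, 4s, ... capped at 30s
    const delay = Math.min(1000 * 2 ** attempt, 30000);
    setTimeout(() => connect(url, attempt + 1), delay);
  };

  ws.onmessage = (event) => {
    console.log("message:", event.data);
  };

  return ws;
}

connect("wss://app.example.com/ws"); // placeholder endpoint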