Can a client subscribe to multiple GraphQL subscriptions in one connection?

Say we have a schema like this:
type Subscription {
    objectAddedA: ObjectA
    objectAddedB: ObjectB
}
Can a GraphQL client subscribe to both the objectAddedA and objectAddedB subscriptions at the same time? I'm having a hard time finding good examples of subscriptions on the web, and the GraphQL docs don't seem to mention them at all unless I'm missing something.
We are designing a system that runs in Kubernetes, where a single pod receives API requests to add/update/delete configuration, and we want to use GraphQL subscriptions to push these changes to any pods that care about them (they would be the GraphQL clients). However, there will be lots of different object types and potentially several different kinds of events to be notified about at any time, so I'm not sure whether you can subscribe to several different subscriptions at once, or whether you have to design the schema so that a single subscription delivers all the possible events you'll need.

It is possible with graphql-python/gql: a single client session runs all of its queries and subscriptions over one websocket connection.
See the documentation here.
Extract:
import asyncio

import backoff
from gql import Client
from gql.transport.websockets import WebsocketsTransport

# First define all your queries using a session argument:

async def execute_query1(session):
    result = await session.execute(query1)
    print(result)

async def execute_query2(session):
    result = await session.execute(query2)
    print(result)

async def execute_subscription1(session):
    async for result in session.subscribe(subscription1):
        print(result)

async def execute_subscription2(session):
    async for result in session.subscribe(subscription2):
        print(result)

# Then create a coroutine which will connect to your API and run all your queries as tasks.
# We use a `backoff` decorator to reconnect using exponential backoff in case of connection failure.

@backoff.on_exception(backoff.expo, Exception, max_time=300)
async def graphql_connection():
    transport = WebsocketsTransport(url="wss://YOUR_URL")
    client = Client(transport=transport, fetch_schema_from_transport=True)
    async with client as session:
        task1 = asyncio.create_task(execute_query1(session))
        task2 = asyncio.create_task(execute_query2(session))
        task3 = asyncio.create_task(execute_subscription1(session))
        task4 = asyncio.create_task(execute_subscription2(session))
        await asyncio.gather(task1, task2, task3, task4)

asyncio.run(graphql_connection())

Actually, the GraphQL specification explicitly says that:
Subscription operations must have exactly one root field.
Python's graphql-core library enforces this through a validation rule, and the libraries built on it (graphene, ariadne and strawberry) follow the rule as well.
This is what the server says if you attempt multiple root fields in one subscription operation:
"error": {
    "message": "Anonymous Subscription must select only one top level field.",
You can remove this validation rule and see what happens, but remember that you're in no-standards land now, and things usually don't end well there... :D
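To see the rule in action without running a server, here is a minimal sketch using graphql-core's validate. The schema mirrors the question; the Query type is only there because a valid schema requires one:

from graphql import build_schema, parse, validate

schema = build_schema("""
    type Query { _unused: Boolean }
    type ObjectA { id: ID }
    type ObjectB { id: ID }
    type Subscription {
        objectAddedA: ObjectA
        objectAddedB: ObjectB
    }
""")

# One anonymous subscription operation selecting two root fields.
document = parse("""
    subscription {
        objectAddedA { id }
        objectAddedB { id }
    }
""")

# validate() applies the spec's validation rules, including the
# single-root-field rule for subscriptions.
for error in validate(schema, document):
    print(error.message)
# -> Anonymous Subscription must select only one top level field.

Note that two separate subscription operations, as in the gql answer above, are fine; it is two root fields inside one operation that the rule rejects.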

Related

nestjs microservices - have one clientProxy to publish message to any microService

Sometimes, you want to say: "I have this message, who can handle it?"
In NestJS, a client proxy is bound directly to a single microservice.
So, as an example, let's say that I have the following microservices: CleaningService and FixingService.
Both of the above can handle the message car, but only CleaningService can handle the message glass.
So, I want to have something like:
this.generalProxy.emit('car', {id: 2});
In this case, I want 2 different microservices to handle the car message: CleaningService and FixingService.
And in this case:
this.generalProxy.emit('glass', {id: 5});
I want only CleaningService to handle it.
How is that possible? How can I create a ClientProxy that is not bound directly to a specific microservice?
The underlying transport layer matters: despite the abstraction in front of the different transports, each underlying one has completely different characteristics and capabilities. The type of messaging pattern you're describing is simple to accomplish with RabbitMQ, because it has the notion of exchanges, queues, publishers and subscribers, while a TCP-based microservice requires a connection from one service to another. Likewise, the Redis transport layer uses simple channels, without the underlying support for fanning some messages out to multiple subscribers while routing others directly to specific subscribers.
This might not be the most popular opinion but I've been using NestJS professionally for over 3 years and I can definitely say that the official microservices packages are not sufficient for most actual production applications. They work great as a proof of concept but quickly fall apart because of exactly these types of issues.
Luckily, NestJS provides great building blocks and primitives in the form of the Module and DI system to allow for much more feature rich plugins to be built. I created one specifically for RabbitMQ to be able to support the exact type of scenario you are describing.
Since you're using RabbitMQ already, I highly recommend that you check out @golevelup/nestjs-rabbitmq, which can easily support what you want to accomplish using native RMQ concepts like exchanges and routing keys (disclaimer: I am the author). It also allows you to manage as many exchanges and queues as you like (instead of being forced to push everything through a single queue) and has native support for multiple messaging patterns, including pub/sub and RPC.
You simply decorate your methods that you want to act as microservice message handlers with the appropriate metadata and messaging will just work as expected. For example:
@Injectable()
export class CleaningService {
  @RabbitSubscribe({
    exchange: 'app',
    routingKey: 'cars',
    queue: 'cleaning-cars',
  })
  public async cleanCar(msg: {}) {
    console.log(`Received message: ${JSON.stringify(msg)}`);
  }

  @RabbitSubscribe({
    exchange: 'app',
    routingKey: 'glass',
    queue: 'cleaning-glass',
  })
  public async cleanGlass(msg: {}) {
    console.log(`Received message: ${JSON.stringify(msg)}`);
  }
}

@Injectable()
export class FixingService {
  @RabbitSubscribe({
    exchange: 'app',
    routingKey: 'cars',
    queue: 'fixing-cars',
  })
  public async fixCar(msg: {}) {
    console.log(`Received message: ${JSON.stringify(msg)}`);
  }
}
With this setup, both the cleaning service and the fixing service will receive the car message in their individual handlers (since they subscribe with the same routing key), and only the cleaning service will receive the glass message.
Publishing messages is simple: you just include the exchange and routing key, and the right handlers will receive the message based on their configuration:
amqpConnection.publish('app', 'cars', { year: 2020, make: 'toyota' });

Axon - Cannot emit query update in different microservice

I'm struggling with a situation where I want to emit a query update via QueryUpdateEmitter, but from a different module (microservice). My application is built upon microservices, and both services are connected to the same Axon Server. The first service creates a subscription query and sends some commands. After a while (through a few commands and events), the second service handles some event and emits an update for the query subscribed earlier. Unfortunately, it seems this emit never reaches the subscriber. The queries are exactly the same and sit in the same packages.
Subscription:
@GetMapping("/refresh")
public Mono<MovieDTO> refreshMovies() {
    commandGateway.send(
        new CreateRefreshMoviesCommand(UUID.randomUUID().toString()));
    SubscriptionQueryResult<MovieDTO, MovieDTO> refreshedMoviesSubscription =
        queryGateway.subscriptionQuery(
            new GetRefreshedMoviesQuery(),
            ResponseTypes.instanceOf(MovieDTO.class),
            ResponseTypes.instanceOf(MovieDTO.class)
        );
    return refreshedMoviesSubscription.updates().next();
}
Emitter:
@EventHandler
public void handle(DataRefreshedEvent event) {
    log.info("[event-handler] Handling {}, movieId={}",
        event.getClass().getSimpleName(),
        event.getMovieId());
    queryUpdateEmitter.emit(GetRefreshedMoviesQuery.class, query -> true,
        Arrays.asList(
            MovieDTO.builder().aggregateId("as").build(),
            MovieDTO.builder().aggregateId("be").build()));
}
Is this situation even possible in the newest version of Axon? A similar configuration within a single service works as expected.
Edit:
I have found a workaround for this situation:
The second service, instead of emitting the query update via QueryUpdateEmitter, publishes an event with the list of movies.
The first service handles this event and then emits the update via QueryUpdateEmitter.
But I'd still like to know if there is a way to do this using queries only, because it seems natural to me (command gateways and event gateways work as expected across services; QueryUpdateEmitter is the exception).
This follows from the implementation of the QueryUpdateEmitter (regardless of whether Axon Server is used).
The QueryUpdateEmitter stores a set of update handlers referencing the issued subscription queries. However, it only maintains the subscription queries issued within its own JVM (the QueryUpdateEmitter implementation is not distributed).
Its intent is to be paired with the component (typically a query model "projector") that answers queries about a given model, updates the model, and emits those updates.
Hence, placing the QueryUpdateEmitter operations in a different (micro)service from where the query is handled will not work.

Return initial data on subscribe event in django graphene subscriptions

I'm trying to respond to the user on subscribe. For example, in a chatroom, when a user connects to the subscription, the subscription responds with data (like a welcome message), but only to the same user who just connected (no broadcast).
How can I do that? :(
Update: We settled on using channels directly; DjangoChannelsGraphqlWs does not allow direct back messages.
Take a look at this DjangoChannelsGraphQL example. The link points to the part that avoids "user self-notifications" (the user being notified about his own actions). You can use the same trick to send a notification only to the user who performed the action, e.g. the one who just subscribed.
A modified publish handler could look like the following:
def publish(self, info, chatroom=None):
    new_msg_chatroom = self["chatroom"]
    new_msg_text = self["text"]
    new_msg_sender = self["sender"]
    new_msg_is_greetings = self["is_greetings"]
    # Send greetings message only to the user who caused it.
    if new_msg_is_greetings:
        if (
            not info.context.user.is_authenticated
            or new_msg_sender != info.context.user.username
        ):
            return OnNewChatMessage.SKIP
    return OnNewChatMessage(
        chatroom=chatroom, text=new_msg_text, sender=new_msg_sender
    )
I did not test the code above, so there could be issues, but I think it illustrates the idea quite well.
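For completeness, here is a sketch of the other side: the code that triggers the greetings message. The helper name send_greetings and the payload keys are assumptions for illustration; OnNewChatMessage.broadcast(group=..., payload=...) is the call the DjangoChannelsGraphQL examples use, and the payload dict is what arrives as self in publish above:

def send_greetings(chatroom, username):
    # Hypothetical trigger, e.g. called after a user subscribes.
    # publish() above delivers it only to `username` because of is_greetings.
    OnNewChatMessage.broadcast(
        group=chatroom,
        payload={
            "chatroom": chatroom,
            "text": f"Welcome, {username}!",
            "sender": username,
            "is_greetings": True,
        },
    )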

Gathering coin volumes - Is my code running asynchronously?

I'm fairly new to programming in Python; I've been programming for about half a year. I've decided to try to build a functional trading bot. While coding this bot, I stumbled upon the asyncio module. I would really like to understand the module better, but it's hard to find simple tutorials or documentation about asyncio.
For my script, I'm gathering the volume per coin. This works perfectly, but it takes a really long time to gather all the volumes. I would like to ask whether my script is running synchronously, and if so, how do I fix it? I'm using an API wrapper to communicate with the Binance exchange.
import binance
import asyncio
import time

s = time.time()

names = [name for name in binance.ticker_prices()]  # Gathering all the coin names

loop = asyncio.get_event_loop()

async def get_volume(name):
    async def get_data():
        return binance.ticker_24hr(name)  # Returns per coin a dict of the data of the last 24hr
    data = await get_data()
    return (name, data['volume'])

tasks = [asyncio.ensure_future(get_volume(name)) for name in names]
results = loop.run_until_complete(asyncio.gather(*tasks))

print('Total time:', time.time() - s)
Since binance.ticker_24hr does not look like a coroutine, it is almost certainly blocking the event loop and therefore preventing asyncio.gather from doing its job. Wrapping it in a coroutine, as the question does, doesn't help: the blocking call still runs on the event loop thread. As a quick fix, you can use run_in_executor to run the blocking function in a separate thread:
async def get_volume(name):
    loop = asyncio.get_event_loop()
    data = await loop.run_in_executor(None, binance.ticker_24hr, name)
    return name, data['volume']
This will work just fine for a reasonable number of parallel tasks. The downside is that it uses threads, so it might not scale to a huge number of parallel requests (or it would require unnecessary waiting). The correct solution in the long run is to use a library that natively supports asyncio.
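If the symbol list is large, one way to keep the thread usage in check is to bound the concurrency with a semaphore. A minimal sketch; the cap of 20 is an arbitrary illustrative value:

import asyncio

sem = asyncio.Semaphore(20)  # at most 20 executor calls in flight at once

async def get_volume(name):
    async with sem:
        loop = asyncio.get_event_loop()
        data = await loop.run_in_executor(None, binance.ticker_24hr, name)
        return name, data['volume']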
Maarten, firstly you are calling get_ticker for every symbol, which means you're making many unnecessary requests. If you call it without a symbol value, you get all tickers in one request. This also removes the need for any loops or async, if you aren't performing other tasks. It looks like the binance library you're using doesn't support this; you can use python-binance to do it:
return client.get_ticker()
That said, I've been testing an asyncio version of python-binance. It's currently in a feature branch if you want to try it:
pip install git+https://github.com/sammchardy/python-binance#feature/asyncio
Import the asyncio version of the client and initialise it:
from binance.client_async import AsyncClient as Client
client = Client("<api_key>", "<api_secret>")
Then you can await the call to get the ticker for a particular symbol:
return await client.get_ticker(symbol=name)
Or, for all symbol tickers, don't pass the symbol parameter:
return await client.get_ticker()
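Putting those pieces together, a minimal sketch of the all-tickers approach might look like this (assuming the feature branch above; the dict comprehension assumes each ticker dict carries 'symbol' and 'volume' keys, as in the question's code):

import asyncio

from binance.client_async import AsyncClient as Client

async def get_volumes():
    client = Client("<api_key>", "<api_secret>")
    tickers = await client.get_ticker()  # no symbol -> every ticker in one request
    return {t['symbol']: t['volume'] for t in tickers}

volumes = asyncio.get_event_loop().run_until_complete(get_volumes())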
Hope that helps

How to architecture a web-socket server with client subscription of specific responses in Phoenix?

I'm developing a web-socket server with the Phoenix Framework, and I need to send real-time messages to my clients.
The basic idea of my web-socket server is that a client can subscribe to some type of information and expect to receive only that; other clients would never receive it unless they subscribe to it too, and the same information is broadcast in real time to every (and only) client subscribed to it.
Also, this information is separated into categories and subcategories, going down to 4 levels.
So, for example, let's say I have 2 root categories of information, CatA and CatB. Each category can have subcategories, so CatA can have CatA.SubCatA and CatA.SubCatB, each subcategory can have further subcategories, and so on.
This information is generated by services, one for each root category (each also handles all the information for its subcategories), so we have CatAService and CatBService. These services need to run as the server starts, constantly generating new information and broadcasting it to anyone subscribed to it.
Now, I have clients that will subscribe to this information; my solution for now is to have a channel for each information type available, so a client can join a channel to receive information of that channel's type.
For that I have something like that in the js code:
let channel = socket.channel("CatA:SubCatA:SubSubCatA", {})
channel.join()
channel.on("new_info", (payload) => { ... })
In this case, I would have a channel that all clients interested in SubSubCatA of SubCatA of CatA can join, and a service for CatA that would generate and broadcast the information for all its subcategories, and so on.
I'm not sure if I was able to explain exactly what I want, but if something is not clear, please tell me so I can better explain it. I also made this (very bad) image as an example of how all the communication would happen: https://ibb.co/fANKPb
Also, note that I could have just one channel for each root category and broadcast all the subcategory information to everyone who joined that category channel, but I'm very concerned about performance and network bandwidth, so my objective is to send information only to the clients that requested it.
Doing some tests here, it seems that if the client joins the channel as shown in the js code above, I can do this:
MyServerWeb.Endpoint.broadcast "CatA:SubCatA:SubSubCatA", "new_info", message
and that client (and all the other clients listening to that channel, but only then) will receive that message.
So, my question is divided into two parts. The first is more generic: what are the correct ways to achieve what I described above?
The second is whether the solution I already came up with is a good way to solve this, since I'm not sure if the length of the string "CatA:SubCatA:SubSubCatA" creates an overhead when the server parses it, or if there is some other limitation that I'm not aware of.
Thanks!
You have to make separate channels for each class of clients, and depending on the ids you receive, you can broadcast messages after checking which clients have joined the channel:
def join("groups:" <> group_slug, _params, socket) do
  %{team_id: team_id, current_user: user} = socket.assigns

  case Repo.get_by(Group, slug: group_slug, team_id: team_id) do
    nil ->
      {:error, %{message: "group not found"}}

    group ->
      case GroupAuthorization.can_view?(group.id, user.id) do
        true ->
          messages = MessageQueries.group_latest_messages(group.id, user)
          json = MessageView.render("index.json", %{messages: messages})
          send self(), :after_join
          {:ok, %{messages: json}, assign(socket, :group, group)}

        false ->
          {:error, %{message: "unauthorized"}}
      end
  end
end
This is an example of sending messages only to users who have subscribed and joined the group. Hope this helps.
