Send a message only to certain clients using WebSockets with RSocket and Spring WebFlux

I am trying to use RSocket over WebSocket in one of my POC projects. In my case user login is not required. I would like to send a message only to certain clients when I receive a message from another service. Basically, my flow goes like this:
Service A Service B
|--------| websocket |------------------| Queue based comm |---------------|
| Web |----------------->| Rsocket server |--------------------->| Another |
| |<-----------------| using Websocket |<---------------------| service |
|--------| websocket |------------------| Queue based comm |---------------|
In my case, I am thinking of using a unique id for each connection and each request, merging both identifiers into a correlation id, and sending it with the message to Service B; when I get the message back from Service B, I figure out which client it needs to go to and send it. Now I understand I may not need two services to do this, but I am doing it for a few other reasons. Though I have a rough idea about how to implement the other pieces, I am new to the RSocket concept. Is it possible to send a message only to a certain client, identified by a certain id, using Spring Boot WebFlux, RSocket, and WebSocket?

Basically, I think you have two options here. The first one is to filter the Flux which comes from Service B; the second one is to store RSocketRequester instances in a Map, as @NikolaB described.
First option:
data class News(val category: String, val news: String)

data class PrivateNews(val destination: String, val news: News)

class NewsProvider {
    private val duration: Long = 250

    // DirectProcessor is deprecated in newer Reactor versions; Sinks.many() is the modern replacement
    private val externalNewsProcessor = DirectProcessor.create<News>().serialize()
    private val sink = externalNewsProcessor.sink()

    fun allNews(): Flux<News> {
        return Flux
            .merge(
                carNews(), bikeNews(), cosmeticsNews(),
                externalNewsProcessor)
            .delayElements(Duration.ofMillis(duration))
    }

    fun externalNews(): Flux<News> {
        return externalNewsProcessor
    }

    fun addExternalNews(news: News) {
        sink.next(news)
    }

    fun carNews(): Flux<News> {
        return Flux
            .just("new lambo!!", "amazing ferrari!", "great porsche", "very cool audi RS4 Avant", "Tesla is smarter than you")
            .map { News("CAR", it) }
            .delayElements(Duration.ofMillis(duration))
            .log()
    }

    fun bikeNews(): Flux<News> {
        return Flux
            .just("specialized enduro still the biggest dream", "giant anthem fast as hell", "gravel long distance test")
            .map { News("BIKE", it) }
            .delayElements(Duration.ofMillis(duration))
            .log()
    }

    fun cosmeticsNews(): Flux<News> {
        return Flux
            .just("nivea - no one wants to hear about that", "rexona anti-odor test")
            .map { News("COSMETICS", it) }
            .delayElements(Duration.ofMillis(duration))
            .log()
    }
}
@RestController
@RequestMapping("/sse")
@CrossOrigin("*")
class NewsRestController() {
    private val log = LoggerFactory.getLogger(NewsRestController::class.java)
    val newsProvider = NewsProvider()

    @GetMapping(value = ["/news/{category}"], produces = [MediaType.TEXT_EVENT_STREAM_VALUE])
    fun allNewsByCategory(@PathVariable category: String): Flux<News> {
        log.info("hello, getting all news by category: {}!", category)
        return newsProvider
            .allNews()
            .filter { it.category == category }
    }
}
The NewsProvider class is a simulation of your Service B, which should return a Flux. Whenever you call addExternalNews, the News is pushed into the stream returned by the allNews method. In the NewsRestController class, we filter the news by category. Open localhost:8080/sse/news/CAR in the browser to see only car news.
If you want to use RSocket instead, you can use a method like this:
@MessageMapping("news.{category}")
fun allNewsByCategory(@DestinationVariable category: String): Flux<News> {
    log.info("RSocket, getting all news by category: {}!", category)
    return newsProvider
        .allNews()
        .filter { it.category == category }
}
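As a rough illustration of the consuming side (not part of the original answer), a Spring client could then subscribe to a single category over the WebSocket transport. This is a minimal Java sketch; it assumes Spring Framework 5.3+ (for RSocketRequester.Builder#websocket), a Boot-auto-configured builder with JSON codecs, and that the server exposes RSocket at ws://localhost:8080/rsocket - none of these details come from the original post.
import java.net.URI;
import org.springframework.messaging.rsocket.RSocketRequester;
import org.springframework.stereotype.Service;
import reactor.core.publisher.Flux;

@Service
public class NewsClient {

    // mirror of the server-side News data class (illustrative)
    record News(String category, String news) {}

    private final RSocketRequester requester;

    public NewsClient(RSocketRequester.Builder builder) {
        // connect over the WebSocket transport; the endpoint path is an assumption
        this.requester = builder.websocket(URI.create("ws://localhost:8080/rsocket"));
    }

    public Flux<News> carNews() {
        // request/stream: the server-side @MessageMapping("news.{category}") filters by category
        return requester.route("news.CAR").retrieveFlux(News.class);
    }
}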
Second option:
Let's store the RSocketRequester in a HashMap (I use vavr.io) with @ConnectMapping.
@Controller
class RSocketConnectionController {
    private val log = LoggerFactory.getLogger(RSocketConnectionController::class.java)
    private var requesterMap: Map<String, RSocketRequester> = HashMap.empty()

    @Synchronized
    private fun getRequesterMap(): Map<String, RSocketRequester> {
        return requesterMap
    }

    @Synchronized
    private fun addRequester(rSocketRequester: RSocketRequester, clientId: String) {
        log.info("adding requester {}", clientId)
        requesterMap = requesterMap.put(clientId, rSocketRequester)
    }

    @Synchronized
    private fun removeRequester(clientId: String) {
        log.info("removing requester {}", clientId)
        requesterMap = requesterMap.remove(clientId)
    }

    @ConnectMapping("client-id")
    fun onConnect(rSocketRequester: RSocketRequester, clientId: String) {
        val clientIdFixed = clientId.replace("\"", "") // check the serializer: why does it add quotes to strings?
        // rSocketRequester.rsocket().dispose() // to reject the connection
        rSocketRequester
            .rsocket()
            .onClose()
            .subscribe(null, null, {
                log.info("{} just disconnected", clientIdFixed)
                removeRequester(clientIdFixed)
            })
        addRequester(rSocketRequester, clientIdFixed)
    }

    @MessageMapping("private.news")
    fun privateNews(news: PrivateNews, rSocketRequesterParam: RSocketRequester) {
        getRequesterMap()
            .filterKeys { key -> checkDestination(news, key) }
            .values()
            .forEach { requester -> sendMessage(requester, news) }
    }

    private fun sendMessage(requester: RSocketRequester, news: PrivateNews) {
        requester
            .route("news.${news.news.category}")
            .data(news.news)
            .send()
            .subscribe()
    }

    private fun checkDestination(news: PrivateNews, key: String): Boolean {
        val list = destinations(news)
        return list.contains(key)
    }

    private fun destinations(news: PrivateNews): List<String> {
        return news.destination
            .split(",")
            .map { it.trim() }
    }
}
Note that we have to add two things to the rsocket-js client: a payload in the SETUP frame to provide the client-id, and a registered Responder to handle the messages sent by the server-side RSocketRequester.
const client = new RSocketClient({
  // send/receive JSON objects instead of strings/buffers
  serializers: {
    data: JsonSerializer,
    metadata: IdentitySerializer
  },
  setup: {
    // for connection mapping on the server
    payload: {
      data: "provide-unique-client-id-here",
      metadata: String.fromCharCode("client-id".length) + "client-id"
    },
    // ms between keepalives sent to the server
    keepAlive: 60000,
    // ms timeout if no keepalive response
    lifetime: 180000,
    // format of `data`
    dataMimeType: "application/json",
    // format of `metadata`
    metadataMimeType: "message/x.rsocket.routing.v0"
  },
  responder: responder,
  transport
});
For more information about that please see this question: How to handle message sent from server to client with RSocket?

I haven't personally used RSocket with the WebSocket transport yet, but as stated in the RSocket specification, the underlying transport protocol shouldn't even matter.
An RSocket component is a server and a client at the same time. So when browsers connect to your RSocket "server", you can inject the RSocketRequester instance, which you can then use to send messages to the "client".
You can then add these instances to your local cache (e.g. put them in some globally available ConcurrentHashMap with a key of your choosing - something from which you'll know, or be able to calculate, to which clients the message from Service B should be propagated).
Then in the code where you receive message from Service B just fetch all RSocketRequester instances from the local cache which match your criteria and send them the message.
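A minimal Java sketch of that idea (the class, the client-id handling, and the "private.message" route are illustrative, not taken from the question):
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.messaging.rsocket.RSocketRequester;
import org.springframework.messaging.rsocket.annotation.ConnectMapping;
import org.springframework.stereotype.Controller;

@Controller
public class ClientRegistry {

    // clientId -> requester for the connected browser
    private final Map<String, RSocketRequester> clients = new ConcurrentHashMap<>();

    @ConnectMapping("client-id")
    public void onConnect(RSocketRequester requester, String clientId) {
        // drop the entry again once the connection closes
        requester.rsocket().onClose()
                .doFinally(signal -> clients.remove(clientId))
                .subscribe();
        clients.put(clientId, requester);
    }

    // call this from wherever the message from Service B arrives
    public void pushTo(String clientId, Object payload) {
        RSocketRequester requester = clients.get(clientId);
        if (requester != null) {
            requester.route("private.message").data(payload).send().subscribe();
        }
    }
}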

Related

Webflux subscribe to nested Publishers and serialize them to JSON

I have a UserDto with related items taken from a repository which is Project Reactor based and thus returns Flux/Mono publishers.
My idea was to add fields/getters to the DTO which themselves are publishers and to lazily evaluate them (subscribe) on demand, but there is a problem:
the controller returns a Flux of DTOs and all is fine, except that Spring doesn't serialize the inner Publishers.
What I'm trying to achieve in short:
@Repository
class RelatedItemsRepo {
    static Flux<Integer> findAll() {
        // simulates Flux of User related data (e.g. Orders or Articles)
        return Flux.just(1, 2, 3);
    }
}

@Component
class UserDto {
    // Trying to get related items as field
    Flux<Integer> relatedItemsAsField = RelatedItemsRepo.findAll();

    // And as a getter
    @JsonProperty("related_items_as_method")
    Flux<Integer> relatedItemsAsMethod() {
        return RelatedItemsRepo.findAll();
    }

    // Here was suggestion to collect flux to list and return Mono
    // but unfortunately it doesn't make the trick
    @JsonProperty("related_items_collected_to_list")
    Mono<List<Integer>> relatedItemsAsList() {
        return RelatedItemsRepo.findAll().collectList();
    }

    // .. another user data
}

@RestController
@RequestMapping(produces = MediaType.APPLICATION_JSON_VALUE)
public class MyController {
    @GetMapping
    Flux<UserDto> dtoFlux() {
        return Flux.just(new UserDto(), new UserDto(), new UserDto());
    }
}
And this is the response I get:
HTTP/1.1 200 OK
Content-Type: application/json
transfer-encoding: chunked
[
{
"related_items_as_method": {
"prefetch": -1,
"scanAvailable": true
},
"related_items_collected_to_list": {
"scanAvailable": true
}
},
{
"related_items_as_method": {
"prefetch": -1,
"scanAvailable": true
},
"related_items_collected_to_list": {
"scanAvailable": true
}
},
{
"related_items_as_method": {
"prefetch": -1,
"scanAvailable": true
},
"related_items_collected_to_list": {
"scanAvailable": true
}
}
]
It seems like Jackson doesn't serialize Flux properly and just calls .toString() on it (or something similar).
My question is: are there existing Jackson serializers for Reactor Publishers, should I implement my own, or am I maybe doing something conceptually wrong?
So in short: how can I push Spring to evaluate those fields (subscribe to them)?
If I understand correctly, what you are trying to achieve is an API that responds with the following:
HTTP 200
[
{
"relatedItemsAsField": [1,2,3]
},
{
"relatedItemsAsField": [1,2,3]
},
{
"relatedItemsAsField": [1,2,3]
}
]
I would collect all the elements emitted by the Flux generated by RelatedItemsRepo#findAll by using Flux#collectList, then map this to set the UserDto object as required.
Here is a gist.
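The gist itself is not included here, but the approach it describes can be sketched roughly like this - a hypothetical Java example in which the DTO holds a plain List and the inner publisher is resolved before serialization (RelatedItemsRepo is the repository from the question):
import java.util.List;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;

@RestController
public class MyController {

    // the DTO holds already-resolved data, so Jackson can serialize it normally
    static class UserDto {
        public List<Integer> relatedItemsAsField;

        UserDto(List<Integer> relatedItemsAsField) {
            this.relatedItemsAsField = relatedItemsAsField;
        }
    }

    @GetMapping
    Flux<UserDto> dtoFlux() {
        // resolve the inner publisher first, then map the collected list into the DTO
        return Flux.range(1, 3)
                .flatMap(i -> RelatedItemsRepo.findAll()
                        .collectList()
                        .map(UserDto::new));
    }
}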

Simple Spring GraphQL Subscription returning error

I'm trying to create a simple Spring GraphQL subscription handler. Here's my controller:
@Controller
public class GreetingController {
    @QueryMapping
    String sayHello() {
        return "Hello!";
    }

    @SubscriptionMapping
    Flux<String> greeting(@Argument int count) {
        return Flux.fromStream(Stream.generate(() -> "Hello @ " + Instant.now()))
                .delayElements(Duration.ofSeconds(1))
                .take(count);
    }
}
Here's the GraphQL schema:
type Query {
sayHello: String
}
type Subscription {
greeting(count: Int): String
}
Spring configuration:
spring:
  graphql:
    graphiql:
      enabled: true
      path: /graphiql
When I try to run the above subscription using the GraphiQL UI hosted by Spring, I receive the following error:
{
"errors": [
{
"isTrusted": true
}
]
}
When I run the same GraphQL request using Postman, I receive the following response:
{
"data": {
"upstreamPublisher": {
"scanAvailable": true,
"prefetch": -1
}
}
}
What is causing the subscription not to return data from my controller?
As explained in the linked GitHub issue, a subscription requires the ability to stream data within a persistent transport connection - this is not available over plain HTTP.
You'll need to enable WebSocket support in your application first. The GraphiQL UI should use the WebSocket transport transparently for this.
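In a Spring Boot application that usually just means setting the WebSocket path for GraphQL; a hedged sketch extending the configuration above (the property applies to recent Spring for GraphQL / Spring Boot versions):
spring:
  graphql:
    graphiql:
      enabled: true
      path: /graphiql
    websocket:
      # enables the GraphQL WebSocket transport on this endpoint
      path: /graphql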

Quarkus SmallRye Graphql-Client Mutation Query

I am trying to execute a GraphQL client query. Sadly, I am not able to find any documentation or examples on how to do a simple mutation using the dynamic GraphQL client. Here is the documentation: https://quarkus.io/guides/smallrye-graphql-client.
mutation mut {
add(p: {
amount: {0},
fromCurrencyId: {1},
reference: {2},
terminalKey: {3},
toCurrencyId: {4}
}) {
address
toCurrencyAmount
rate
createdAt
expireAt
}
}
{0}..{4} are variable placeholders.
Does anyone know how to execute this query with the DynamicGraphQLClient?
Thanks!
Having declared your server-side mutation following the Eclipse MicroProfile GraphQL API as follows:
@GraphQLApi
@ApplicationScoped
public class MyGraphQLApi {

    @Mutation
    public OutputType add(@Name("p") InputType p) {
        // perform your mutation and return result
    }
}
You can then use the DynamicGraphQLClient to perform the mutation via its executeSync method, passing an io.smallrye.graphql.client.core.Document constructed after your mutation structure (the document, operation, field, arg, inputObject, and prop helpers below are static imports from the io.smallrye.graphql.client.core package):
@Inject
private DynamicGraphQLClient client;

public void add() throws Exception {
    Document document = Document.document(
        operation(
            OperationType.MUTATION,
            "mut",
            field(
                "add",
                arg(
                    "p",
                    inputObject(
                        prop("amount", "amountValue"),
                        prop("fromCurrencyId", "fromCurrencyIdValue"),
                        prop("reference", "referenceValue"),
                        prop("terminalKey", "terminalKeyValue"),
                        prop("toCurrencyId", "toCurrencyIdValue")
                    )
                )),
            field("address"),
            field("toCurrencyAmount"),
            field("rate"),
            field("createdAt"),
            field("expireAt")
        )
    );
    JsonObject data = client.executeSync(document).getData();
    System.out.println(data.getString("address"));
}

Publish events with MassTransit

I'm trying to publish a message in one microservice and receive it in another one, but cannot implement this using MassTransit 5.5.3 with RabbitMQ.
As far as I know, we don't have to create a ReceiveEndpoint to be able to publish an event, so I'm just creating the same message interface in both services and publishing a message, but as I can see in RabbitMQ it either goes nowhere (if it isn't bound to a queue) or goes to the "_skipped" queue.
Publisher:
namespace Publisher
{
class Program
{
static async Task Main(string[] args)
{
var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
IRabbitMqHost host = cfg.Host("host", "vhost", h =>
{
h.Username("xxx");
h.Password("yyy");
});
});
bus.Start();
await bus.Publish<Message>(new { Text = "Hello World" });
Console.ReadKey();
bus.Stop();
}
}
public interface Message
{
string Text { get; set; }
}
}
Consumer:
namespace Consumer
{
class Program
{
static async Task Main(string[] args)
{
var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
IRabbitMqHost host = cfg.Host("host", "vhost", h =>
{
h.Username("xxx");
h.Password("yyy");
});
cfg.ReceiveEndpoint(host, e =>
{
e.Consumer<MbConsumer>();
});
});
bus.Start();
bool finish = false;
while(!finish)
{
await Task.Delay(1000);
}
bus.Stop();
}
}
public interface Message
{
string Text { get; set; }
}
public class MbConsumer : IConsumer<Message>
{
public async Task Consume(ConsumeContext<Message> context)
{
await Console.Out.WriteLineAsync(context.Message.Text);
}
}
}
I'm expecting the consumer to get the message once it's been published, but it doesn't get it. I think this is because the fully qualified message types are different ("Publisher.Message" vs. "Consumer.Message"), so the message contracts are different. How should I fix this code to get the event in the consumer? It looks like I'm missing something fundamental about RabbitMQ or MassTransit.
Your guess is correct. MassTransit uses the fully qualified class name as the message contract name. MassTransit also uses type-based routing, so FQCNs are used to create exchanges and bindings.
So, if you move your message class to a separate namespace like:
namespace Messages
{
public interface Message
{
string Text { get; set; }
}
}
You then can reference this type when you publish a message
await bus.Publish<Messages.Message>(new { Text = "Hello World" });
and define your consumer
public class MbConsumer : IConsumer<Messages.Message>
{
public async Task Consume(ConsumeContext<Messages.Message> context)
{
await Console.Out.WriteLineAsync(context.Message.Text);
}
}
it will work.
You might also want to look at the RabbitMQ management UI to find out about the MassTransit topology. With your code, you will see two exchanges, one Publisher.Message and another Consumer.Message, where your consumer queue is bound to the Consumer.Message exchange, but you publish messages to the Publisher.Message exchange and they just vanish.
I would also suggest specifying a meaningful endpoint name for your receive endpoint:
cfg.ReceiveEndpoint(host, "MyConsumer", e =>
{
e.Consumer<MbConsumer>();
});

GraphQL Relay Mutation Config RANGE_ADD's parentName for connections

I have a page that uses a GraphQL Relay connection which fetches drafts.
query {
connections {
drafts(first: 10) {
edges {
node {
... on Draft {
id
}
}
}
}
}
}
On this page, I also create a draft through CreateDraftMutation.
mutation {
createDraft(input: {
clientMutationId: "1"
content: "content"
}) {
draft {
id
content
}
}
}
After this mutation, I want Relay to add the created draft into its store. The best candidate for the mutation config is RANGE_ADD, which is documented as follows:
https://facebook.github.io/relay/docs/guides-mutations.html
RANGE_ADD
Given a parent, a connection, and the name of the newly created edge in the response payload Relay will add the node to the store and attach it to the connection according to the range behavior specified.
Arguments
parentName: string
The field name in the response that represents the parent of the connection
parentID: string
The DataID of the parent node that contains the connection
connectionName: string
The field name in the response that represents the connection
edgeName: string
The field name in the response that represents the newly created edge
rangeBehaviors: {[call: string]: GraphQLMutatorConstants.RANGE_OPERATIONS}
A map between printed, dot-separated GraphQL calls in alphabetical order, and the behavior we want Relay to exhibit when adding the new edge to connections under the influence of those calls. Behaviors can be one of 'append', 'ignore', 'prepend', 'refetch', or 'remove'.
The example from the documentation goes as follows:
class IntroduceShipMutation extends Relay.Mutation {
// This mutation declares a dependency on the faction
// into which this ship is to be introduced.
static fragments = {
faction: () => Relay.QL`fragment on Faction { id }`,
};
// Introducing a ship will add it to a faction's fleet, so we
// specify the faction's ships connection as part of the fat query.
getFatQuery() {
return Relay.QL`
fragment on IntroduceShipPayload {
faction { ships },
newShipEdge,
}
`;
}
getConfigs() {
return [{
type: 'RANGE_ADD',
parentName: 'faction',
parentID: this.props.faction.id,
connectionName: 'ships',
edgeName: 'newShipEdge',
rangeBehaviors: {
// When the ships connection is not under the influence
// of any call, append the ship to the end of the connection
'': 'append',
// Prepend the ship, wherever the connection is sorted by age
'orderby(newest)': 'prepend',
},
}];
}
/* ... */
}
If the parent is as obvious as faction, this is a piece of cake, but I've been having a hard time identifying parentName and parentID when the connection comes directly from the query's connections field.
How do I do this?
Edit:
This is how the query was exported:
export default new GraphQLObjectType({
name: 'Query',
fields: () => ({
node: nodeField,
viewer: {
type: viewerType,
resolve: () => ({}),
},
connections: {
type: new GraphQLObjectType({
name: 'Connections',
which in turn is used in the Relay container:
export default Relay.createContainer(MakeRequestPage, {
fragments: {
connections: () => Relay.QL`
fragment on Connections {
I've been having hard time identifying parentName and parentID if it
came directly from query connections.
faction and ships in the example from the Relay documentation are exactly the same as connections and drafts in your case. Each GraphQLObject has an ID, and so does your connections object. Therefore, for your mutation, parentName is connections and parentID is the ID of connections.
query {
connections {
id
drafts(first: 10) {
edges {
node {
... on Draft {
id
}
}
}
}
}
}
By the way, I guess connections and drafts are terms from your application domain. Otherwise, connections is easily confused with the GraphQL connection type.
