Here's a proto definition for a service that consumes a stream of events from a client:
message Event {
  // ...
}
service EventService {
  rpc Publisher(stream Event) returns (google.protobuf.Empty);
}
The problem is that the server needs to be told what to do with this stream. Ideally, it would first receive an Options message:
message Event {
  // ...
}
message Options {
  // ...
}
service EventService {
  rpc Publisher(Options, stream Event) returns (google.protobuf.Empty);
}
However, gRPC only supports a single request parameter for rpc methods.
One solution is to introduce an additional PublishMessage message which
can contain either an Options or Event message.
message PublishMessage {
  oneof content {
    Options options = 1;
    Event event = 2;
  }
}
The service would then expect the first PublishMessage to contain an Options message, with all subsequent ones containing Event messages. This introduces additional overhead from the wrapping message and makes the API a little clunky.
Is there a cleaner way to achieve the same result?
Using oneof is the suggested approach when many fields or messages are in play. The overhead is minimal, so it wouldn't generally be a concern. There is the clunkiness, though.
If there are only a few fields, you may want to combine the fields from Options and Event into a single message. Or, similarly, add Options to Event as a field, as sketched below; you'd expect the Options fields to be present on the first request and missing from subsequent ones. This works better when there are fewer configuration fields, like just a "name."
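A minimal proto sketch of that second option, folding Options into Event as a field (the field number and comments here are illustrative):
message Options {
  // ...
}
message Event {
  // Expected to be set on the first message of the stream
  // and left unset on subsequent messages.
  Options options = 1;
  // ...
}
service EventService {
  rpc Publisher(stream Event) returns (google.protobuf.Empty);
}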
Related
I'm creating a few microservices using NestJS.
For instance, I have x, y & z services all interconnected by gRPC, but I want service x to send updates to a webapp on a particular entity change, so I have considered server-sent events [open to any other better solution].
Following the NestJS documentation, they have a function running at an interval for the SSE route, which seems resource-intensive. Is there a way to actually send events only when there's an update?
Let's say I have another API call in the same service that is triggered by a button click on another webapp. How do I make the event fire only when the button is clicked, rather than continuously sending events? Also, if you know an idiomatic way to achieve this without getting hacky, that would be appreciated; I want hacks to be a last resort.
[BONUS Question]
I also considered MQTT to send events. But I get the feeling that it isn't possible for a single service to use both MQTT and gRPC. I'm skeptical of using MQTT because of its latency and how it would affect internal message passing. If I could limit it to external clients, that would be great (i.e., service x would use gRPC for internal connections and expose just one route to the webapp over MQTT).
(PS I'm new to microservices so please be comprehensive about your solutions :p)
Thanks in advance for reading till the end!
You can. The important thing is that in NestJS, SSE is implemented with Observables, so as long as you have an observable you can add to, you can use it to send back SSE events. The easiest way to work with this is with Subjects. I used to have an example of this somewhere, but generally it would look something like this:
import { Controller, Injectable, Sse } from '@nestjs/common';
import { Observable, Subject } from 'rxjs';

@Injectable()
export class SseService {
  // A Subject we can push events into from anywhere in the app
  private events = new Subject<unknown>();

  addEvent(event: unknown) {
    this.events.next(event);
  }

  sendEvents(): Observable<unknown> {
    return this.events.asObservable();
  }
}

@Controller()
export class SseController {
  constructor(private readonly sseService: SseService) {}

  @Sse()
  doTheSse(): Observable<unknown> {
    return this.sseService.sendEvents();
  }
}

@Injectable()
export class ButtonTriggeredService {
  constructor(private readonly sseService: SseService) {}

  buttonClickedOrSomething() {
    // SSE payloads should be MessageEvent-shaped, i.e. { data: ... }
    this.sseService.addEvent({ data: 'button clicked' });
  }
}
Pardon the pseudo-code nature of the above, but in general it does show how you can use Subjects to create observables for SSE events. So long as the @Sse() endpoint returns an observable with the proper shape, you're golden.
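For reference on that "proper shape": the observable should emit MessageEvent-like objects, i.e. payloads wrapped in a { data } envelope. A minimal sketch (the interface is paraphrased from the NestJS SSE docs; the payload is made up):
import { Observable, Subject } from 'rxjs';

// Shape NestJS expects SSE observables to emit (paraphrased from the docs).
interface MessageEvent {
  data: string | object;
}

const events = new Subject<MessageEvent>();

// Wrap any payload in the { data } envelope before pushing it to subscribers.
function addEvent(payload: object): void {
  events.next({ data: payload });
}

const stream: Observable<MessageEvent> = events.asObservable();
addEvent({ clicked: true }); // hypothetical payload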
There is a better way to handle events with SSE in NestJS:
Please see this repo with a code example:
https://github.com/ningacoding/nest-sse-bug/tree/main/src
There, you basically have a service:
import { Injectable } from '@nestjs/common';
import { fromEvent } from 'rxjs';
import { EventEmitter } from 'events';

@Injectable()
export class EventsService {
  private readonly emitter = new EventEmitter();

  subscribe(channel: string) {
    return fromEvent(this.emitter, channel);
  }

  emit(channel: string, data?: object) {
    this.emitter.emit(channel, { data });
  }
}
Obviously, channel can be any string; as a recommendation, use a path style.
For example: "events/for/<user_id>", and users subscribed to that channel will receive only the events for that channel, and only when they are fired ;) - Fully compatible with @UseGuards, etc. :)
Additional note: Don't inject any service inside EventsService, because of a known bug.
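For illustration, a hypothetical controller consuming this service might look like the following (the route and parameter names are assumptions, not taken from the repo):
import { Controller, Param, Sse } from '@nestjs/common';
import { Observable } from 'rxjs';
import { EventsService } from './events.service';

@Controller('events')
export class EventsController {
  constructor(private readonly eventsService: EventsService) {}

  // Each user subscribes to their own channel, e.g. GET /events/for/42
  @Sse('for/:userId')
  forUser(@Param('userId') userId: string): Observable<unknown> {
    return this.eventsService.subscribe(`events/for/${userId}`);
  }
}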
@Sse('sse-endpoint')
sse(): Observable<any> {
  // data to stream
  const arr = ['d1', 'd2', 'd3'];
  return new Observable((subscriber) => {
    // emit one value per chunk until the array is drained
    while (arr.length) {
      subscriber.next(arr.pop());
    }
    subscriber.complete(); // complete the subscription
  });
}
Yes, this is possible: instead of using an interval, we can use an event emitter.
Whenever the event is emitted, we can send the response back to the client.
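A minimal sketch of that idea, pairing a button-triggered route with an SSE stream in one controller (route names and payload are made up):
import { Controller, Post, Sse } from '@nestjs/common';
import { Observable, Subject } from 'rxjs';

@Controller()
export class UpdatesController {
  private readonly updates = new Subject<{ data: object }>();

  // Hit by the button click on the other webapp; fires exactly one SSE event.
  @Post('button-click')
  onButtonClick() {
    this.updates.next({ data: { clickedAt: Date.now() } });
    return { ok: true };
  }

  // The webapp subscribes here; events arrive only when pushed above.
  @Sse('updates')
  updatesStream(): Observable<{ data: object }> {
    return this.updates.asObservable();
  }
}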
I have the following scenario, whereby my program uses a blocking queue to process messages asynchronously. There are multiple RSocket clients who wish to receive these messages. My design is such that when a message arrives in the blocking queue, the stream bound to the Flux will emit it. I have tried to implement this requirement as below, but the client doesn't receive any response. However, I can see the Stream supplier getting triggered correctly.
Can someone please help?
@MessageMapping("addListenerHook")
public Flux<QueryResult> addListenerHook(String clientName) {
    System.out.println("Adding Listener: " + clientName);
    BlockingQueue<QueryResult> listenerQ = new LinkedBlockingQueue<>();
    Datalistener.register(clientName, listenerQ);
    return Flux.fromStream(
            () -> Stream.generate(() -> streamValue(listenerQ))).map(q -> {
        System.out.println("I got an event : " + q.getResult());
        return q;
    });
}

private QueryResult streamValue(BlockingQueue<QueryResult> inStream) {
    try {
        return inStream.take();
    } catch (Exception e) {
        return null;
    }
}
This is tough to solve simply and cleanly because of the blocking API. I think this is why there aren't simple bridge APIs here to help you implement this. You should come up with a clean solution to turn the BlockingQueue into a Flux first. Then the spring-boot part becomes a non-event.
This is why the correct solution probably involves a custom BlockingQueue implementation like ObservableQueue in https://www.nurkiewicz.com/2015/07/consuming-javautilconcurrentblockingque.html
An alternative approach is in "How can I create reactor Flux from a blocking queue?"
If you need to retain the LinkedBlockingQueue, a starting solution might be something like the following.
val f = flux<QueryResult> {
    val listenerQ = LinkedBlockingQueue<QueryResult>()
    Datalistener.register(clientName, listenerQ)
    while (true) {
        send(listenerQ.take()) // block until the next message arrives, then emit it
    }
}.subscribeOn(Schedulers.elastic())
With an API like flux you should definitely avoid any side effects before the subscribe, so don't register your listener until inside the body of the method. But you will need to improve this example to handle cancellation (or however you cancel the listener) and to interrupt the thread doing the take.
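For completeness, a hedged Java sketch of the same idea using Flux.create, with rudimentary cancellation handling (Datalistener and QueryResult are the types from the question; an unregister step is omitted since that API isn't shown):
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import reactor.core.publisher.Flux;

public Flux<QueryResult> addListenerHook(String clientName) {
    return Flux.create(sink -> {
        BlockingQueue<QueryResult> listenerQ = new LinkedBlockingQueue<>();
        Datalistener.register(clientName, listenerQ);
        Thread poller = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    sink.next(listenerQ.take()); // block until the next message, then emit it
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // subscriber cancelled; stop polling
            }
        });
        sink.onDispose(poller::interrupt); // interrupt the blocking take() on cancel
        poller.start();
    });
}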
I have a service which receives differently structured messages from different message queues. With @StreamListener conditions, we can choose for every message type how that message should be handled. As an example:
We receive two different types of messages, which have different header fields and values e.g.
Incoming from "order" queue:
Order1: { Header: {catalog:groceries} }
Order2: { Header: {catalog:tools} }
Incoming from "shipment" queue:
Shipment1: { Header: {region:Europe} }
Shipment2: { Header: {region:America} }
There is a binding for each queue, and with the corresponding @StreamListener I can process the messages by catalog and region differently,
e.g.
@StreamListener(target = OrderSink.ORDER_CHANNEL, condition = "headers['catalog'] == 'groceries'")
public void onGroceriesOrder(GroceryOrder order) {
    ...
}
So the question is, how to achieve this with the new Spring Cloud Function approach?
In the documentation at https://cloud.spring.io/spring-cloud-static/spring-cloud-stream/3.0.2.RELEASE/reference/html/spring-cloud-stream.html#_event_routing it is mentioned:
Also, for SpEL, the root object of the evaluation context is Message so you can do evaluation on individual headers (or message) as well: …routing-expression=headers['type']
Is it possible to add the routing-expression to the binding like this (in application.yml)?
onGroceriesOrder-in-0:
  destination: order
  routing-expression: "headers['catalog']==groceries"
EDIT after first answer
If the above expression at this location is not possible, which is what the first answer implies, then my question is as follows:
As far as I understand, an expression like routing-expression: headers['catalog'] must be set globally, because the result maps to certain (consumer) functions.
How can I ensure that the two different messages on each queue will be forwarded to their own consumer function, e.g.
Order1 --> MyOrderService.onGroceriesOrder()
Order2 --> MyOrderService.onToolsOrder()
Shipment1 --> MyShipmentService.onEuropeShipment()
Shipment2 --> MyShipmentService.onAmericaShipment()
That was easy with @StreamListener, because each method gets its own @StreamListener annotation with different conditions. How can this be achieved with the new routing-expression setting?
As an aside, the above is not a valid expression; I think you meant headers['catalog']=='groceries'. If so, what would you expect to happen from evaluating it, given the only two possible outcomes are true/false? Anyway, these questions are rhetorical, but they help in understanding the problem and how to fix it.
The expression must resolve to the name of the function to route TO. So. . .
routing-expression: headers['catalog'] - assumes that the actual value of the catalog header is the name of the function to invoke
routing-expression: headers['catalog']=='groceries' ? 'processGroceries' : 'processOther' - maps the value 'groceries' to the 'processGroceries' function.
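For reference, a sketch of where such an expression would sit in application.yml when using the built-in RoutingFunction (property names are taken from the Spring Cloud Stream 3.0.x docs; the destination and function names are illustrative):
spring:
  cloud:
    stream:
      function:
        routing:
          enabled: true
      bindings:
        functionRouter-in-0:
          destination: order
    function:
      routing-expression: "headers['catalog'] == 'groceries' ? 'processGroceries' : 'processTools'"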
For a specific routing, you can use MessageRoutingCallback strategy:
MessageRoutingCallback
The MessageRoutingCallback is a strategy to assist with determining
the name of the route-to function definition.
public interface MessageRoutingCallback {
    FunctionRoutingResult routingResult(Message<?> message);
    . . .
}
All you need to do is implement and register it as a bean to be picked
up by the RoutingFunction. For example:
@Bean
public MessageRoutingCallback customRouter() {
    return new MessageRoutingCallback() {
        @Override
        public FunctionRoutingResult routingResult(Message<?> message) {
            return new FunctionRoutingResult((String) message.getHeaders().get("func_name"));
        }
    };
}
Spring Cloud Function
I am experimenting with using Apache Camel to implement a TCP game server. It will accept bi-directional, synchronous telnet or SSH connections from multiple human or bot players.
The communication "protocol" is a bit crude, and based on legacy infrastructure that's already in place from an earlier version. Basically, the client and server exchange I/O over a socket (one connection per client).
Usually, this consists of one-line command strings, or one-line response strings. However, in some cases the input or output can span multiple line breaks before it is considered "complete" and ready for the other side's response. So my plan is to:
Create a TCP socket server using Spring Boot and Apache Camel, with the latter's "Netty4" component.
Use aggregation to collect the incoming lines of text from a socket connection. Roll them up into messages of one or more lines, depending on the type of input detected.
Pass the resulting message to an endpoint, which parses the input and returns the appropriate response back to the socket.
I can show any other code or Spring config, but the heart of my question seems to be the route I'm declaring:
@Component
public class EchoRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // "sync=true" seems necessary to return any response to the client at all
        //
        // "textline=true&autoAppendDelimiter=false" seem necessary to properly handle
        // the socket input as newline-terminated strings, rather than processing
        // input byte-by-byte
        from("netty4:tcp://localhost:4321?sync=true&textline=true&autoAppendDelimiter=false")

                // This line, and the corresponding `.header("incoming")` line below, are
                // perhaps a bit dodgy. I'm assuming that all messages on the route
                // from a given client socket are already effectively "correlated", and
                // that messages from multiple client sockets are not inter-mingled
                // here. So I'm basically wildcard-ing the correlation mechanism. If my
                // assumption is wrong, then I'm not sure how to correlate by
                // client socket.
                .setHeader("incoming", constant(true))

                // Taken from numerous examples I've seen in Camel books and website
                // pages. Just concatenates the correlated messages until
                // completion occurs.
                .aggregate(new AggregationStrategy() {
                    @Override
                    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
                        if (oldExchange == null) {
                            return newExchange;
                        }
                        final String oldBody = oldExchange.getIn().getBody(String.class);
                        final String newBody = newExchange.getIn().getBody(String.class);
                        oldExchange.getIn().setBody(oldBody + newBody);
                        return oldExchange;
                    }
                })

                // See comment on "setHeader(...)" above.
                .header("incoming")

                // In this initial testing, aggregation of a particular message is
                // considered complete when the last line received is "EOM".
                .completionPredicate(exchange -> {
                    final String body = exchange.getIn().getBody(String.class);
                    final boolean done = body.endsWith("EOM");
                    return done;
                })

                // This endpoint will eventually parse the aggregated message and
                // perform logic on it. Right now, it just returns the input message
                // with a prefix.
                .to("bean:echoService");
    }
}
When I start my server, and telnet to port 4321 from a separate terminal window, I can verify in the debugger that:
The .completionPredicate(...) logic is being invoked upon each line of input as expected, and
The echoService endpoint is being invoked as expected after an EOM line of input. The message passed to the endpoint contains the expected aggregated content.
However, there are two problems:
The server is echoing each line of input back to the client connection, rather than letting the endpoint determine the response content.
The server is not sending the endpoint return value to the client. I log it to the server console, but otherwise it's silently discarded.
Any suggestions on what I might be missing here? The desired behavior is for the route to send the endpoint's return value to the client socket, and nothing but the endpoint's return value. Thanks!
I'm trying to understand the proper way to use Windows.Foundation.Diagnostics.LoggingChannel. In particular, I'd like to understand the purpose behind the Level property and when this property is set.
As described in the MSDN documentation of LoggingChannel, the Level property is read-only. So how can I set the level at which a channel accepts messages?
Currently what I have designed as a logger for my app is something like below:
public class Logger
{
    public LoggingLevel LoggerLoggingLevel { get; set; }

    private LoggingSession _session;
    private LoggingChannel _channel;

    public Logger()
    {
        _channel = new LoggingChannel("MyChannel");
        _session = new LoggingSession("MySession");
        _session.AddLoggingChannel(_channel);
    }

    public void LogMessage(string msg, LoggingLevel level)
    {
        if (level >= LoggerLoggingLevel)
        {
            _channel.LogMessage(msg, level);
        }
    }

    .
    .
    .
}
// The consumer of the Logger class will create an instance of it,
// set the LoggerLoggingLevel, and then start logging messages at various levels.
// At any point, the consumer can change LoggerLoggingLevel to start accepting
// messages at different levels.
Is this the right approach, or is there a better way (for example, by somehow setting the level of _channel and then passing the message & level to the channel, letting the channel decide whether it should filter out the message or accept and log it)?
LoggingChannel.Level tells you "somebody has expressed interest in receiving messages from you that are of severity 'Level' or higher". This property will be set automatically by the runtime when somebody subscribes to events from your LoggingChannel instance. (Within your app, you can subscribe to your app's events using the LoggingSession class; outside of your app, you can record your app's events using a tool like tracelog or xperf.)
In simple scenarios, you don't need to worry about the value of LoggingChannel.Level. The LoggingChannel.LogMessage function already checks the value of LoggingChannel.Level. It also checks the value of LoggingChannel.Enabled, which tells you whether anybody is subscribed to your events at any level. (Note that the value of LoggingChannel.Level is UNDEFINED and MEANINGLESS unless LoggingChannel.Enabled is true.) In normal use, you don't need to worry about LoggingChannel.Enabled or LoggingChannel.Level -- just call LogMessage and let LoggingChannel check the levels for you.
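In other words, the simple path is just something like this sketch (channel name made up):
var channel = new LoggingChannel("MyChannel");
// LogMessage checks Enabled/Level internally, so no guard is needed here.
channel.LogMessage("Something happened", LoggingLevel.Information);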
LoggingChannel exposes the Enabled and Level properties to support a more complex scenario where it is expensive to gather the data you are about to log. In this case, you would probably like to skip gathering the data if nobody is listening for your event. You would then write code like this:
if (channel.Enabled && channel.Level <= eventLevel)
{
    string expensiveData = GatherExpensiveData();
    channel.LogMessage(expensiveData, eventLevel);
}
Note that the Windows 10 version of LoggingChannel added a bunch of new methods to make life a bit easier. If your program will run on Windows 10 or later, you can use the IsEnabled method instead of separate checks for Enabled and Level:
if (channel.IsEnabled(eventLevel))
{
    string expensiveData = GatherExpensiveData();
    channel.LogMessage(expensiveData, eventLevel);
}
A bunch of other stuff was also added to LoggingChannel for Windows 10. You can now log complex events (strongly-typed fields) instead of just strings, you can define keywords and opcodes (look up ETW documentation for more information), and you can basically have your LoggingChannel act like a first-class ETW citizen.
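For example, a brief sketch of the strongly-typed Windows 10 API (the event and field names here are made up):
var channel = new LoggingChannel("MyChannel", null); // Windows 10 constructor; null = default options
if (channel.IsEnabled(LoggingLevel.Information))
{
    var fields = new LoggingFields();
    fields.AddString("user", "alice");   // strongly-typed string field
    fields.AddInt32("attempts", 3);      // strongly-typed integer field
    channel.LogEvent("SignIn", fields, LoggingLevel.Information);
}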