MassTransit StateMachine - How to publish messages whose type can be identified only at runtime

I have a state machine. After each event completion and state change, it publishes one or more commands using the state machine context's Publish, like below:
if (req.GetType() == typeof(ValuationRun_WfReq))
{
    var obj = PreparePayLoad<ValuationRun_WfReq>(wfTask, req, context.Instance.CorrelationId);
    obj.CorrelationId = context.Instance.CorrelationId;
    await context.Publish(obj);
}
else if (req.GetType() == typeof(MonthEndStep2_WfReq))
{
    var obj = PreparePayLoad<MonthEndStep2_WfReq>(wfTask, req, context.Instance.CorrelationId);
    obj.CorrelationId = context.Instance.CorrelationId;
    await context.Publish(obj);
}
else if (req.GetType() == typeof(MonthEndStep5_WfReq))
{
    var obj = PreparePayLoad<MonthEndStep5_WfReq>(wfTask, req, context.Instance.CorrelationId);
    obj.CorrelationId = context.Instance.CorrelationId;
    await context.Publish(obj);
}
Since each published message should go to its own queue, I have to explicitly add an if condition to check the type and publish with that type. I have more than 20 types, so this means an if condition for each message type. I need to change this so that the required message type is identified from AppSettings and the message is published to the right queue.
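One way to collapse the if chain, sketched below under a couple of assumptions: MassTransit's publish endpoints expose a non-generic Publish(object message, Type messageType) overload that publishes using the supplied runtime type, so the type name can come from AppSettings and the generic PreparePayLoad<T> helper can be invoked via reflection. The configuration key and the reflection plumbing are illustrative, not the poster's code:

// A sketch, not the poster's code: look up the CLR type name from AppSettings
// ("MessageTypes:..." and wfTask.TaskName are hypothetical) and resolve it at runtime.
var typeName = configuration[$"MessageTypes:{wfTask.TaskName}"];
var messageType = Type.GetType(typeName)
    ?? throw new InvalidOperationException($"Unknown message type '{typeName}'");

// Invoke the existing generic PreparePayLoad<T> helper for the resolved type
// (add BindingFlags.NonPublic | BindingFlags.Instance if the helper is private).
var prepare = GetType().GetMethod(nameof(PreparePayLoad))!.MakeGenericMethod(messageType);
dynamic obj = prepare.Invoke(this, new object[] { wfTask, req, context.Instance.CorrelationId })!;
obj.CorrelationId = context.Instance.CorrelationId;

// The non-generic Publish(object, Type) overload publishes with the runtime
// type, so the message still lands on that type's own queue.
await context.Publish((object)obj, messageType);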

Related

Capturing ElasticsearchSink Exceptions in Flink

I've recently noticed some issues in the logs of my Flink job that handles writing to an Elasticsearch index. I was hoping to leverage some of the metrics that Flink exposes (or piggyback on them) to update metric counters when I encounter specific kinds of errors.
val builder = ElasticsearchSink.Builder(...)
builder.setFailureHandler { actionRequest, throwable, _, _ ->
    // Log error here (and update metrics via metricGroup.counter(...))
}
return builder.build()
Currently, I don't have any "context" when the setFailureHandler callback fires, and while I can log it, ideally I'd like to expose a metric to track how frequently this is occurring:
builder.setFailureHandler { actionRequest, throwable, _, _ ->
    elasticExceptionsCounter.inc()
}
One additional wrinkle here is that my specific scenario relies on dynamically creating and handling these sinks via a router like the following:
class DynamicElasticsearchSink<ElementT, RouteT, SinkT : ElasticsearchSinkBase<ElementT, out AutoCloseable>>(
    private val sinkRouter: ElasticsearchSinkRouter<ElementT, RouteT, SinkT>
) : RichSinkFunction<ElementT>(), CheckpointedFunction {

    // Store a reference to all of the current routes
    private val sinkRoutes: MutableMap<RouteT, SinkT> = ConcurrentHashMap()
    private lateinit var configuration: Configuration

    override fun open(parameters: Configuration) {
        configuration = parameters
    }

    override fun invoke(value: ElementT, context: SinkFunction.Context) {
        val route = sinkRouter.getRoute(value)
        var sink = sinkRoutes[route]
        if (sink == null) {
            // Build a new sink for this key and cache it for later use based on incoming records
            sink = sinkRouter.createSink(route, value)
            sink.runtimeContext = runtimeContext
            sink.open(configuration)
            sinkRoutes[route] = sink
        }
        sink.invoke(value, context)
    }

    // Omitted for brevity
}
and the sinkRouter.createSink() looks like the following:
override fun createSink(cacheKey: String, element: JsonObject): ElasticsearchSink<JsonObject> {
    return buildSinkFromRoute(element)
}

private fun buildSinkFromRoute(element: JsonObject): ElasticsearchSink<JsonObject> {
    val builder = ElasticsearchSink.Builder(
        buildHostsFromElement(element),
        ElasticsearchRoutingFunction()
    )

    // Various configuration omitted for brevity

    builder.setFailureHandler { actionRequest, throwable, _, _ ->
        // Here's where I'd like to capture the failures and record them as metrics
    }
    return builder.build()
}
Is there a way to support this currently, or what options are available for handling this?
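One hedged option, given that DynamicElasticsearchSink is itself a RichSinkFunction: keep a Serializable AtomicLong on the outer sink, expose it as a gauge in open(), and let each dynamically built failure handler increment it. Because the inner sinks are created on the TaskManager at runtime, the handler closure never ships with the job graph. Getting the counter from the outer sink into buildSinkFromRoute (for example as an extra argument to createSink) is an assumption for illustration, as is the metric name:

import java.util.concurrent.atomic.AtomicLong
import org.apache.flink.metrics.Gauge

// Inside DynamicElasticsearchSink: AtomicLong is Serializable, so closures
// over it still pass the Elasticsearch builder's serializability check.
private val failureCount = AtomicLong()

override fun open(parameters: Configuration) {
    configuration = parameters
    // Expose the shared count through Flink's metrics system, once per subtask.
    runtimeContext.metricGroup.gauge("elasticsearchSinkFailures", Gauge { failureCount.get() })
}

// Inside buildSinkFromRoute (with failureCount passed in from the outer sink):
builder.setFailureHandler { _, throwable, _, _ ->
    failureCount.incrementAndGet()
}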

Does Swift's Combine framework have a sample(on:) operator similar to those in RxSwift or ReactiveSwift?

Does anyone know how to recreate sampling behavior in Combine?
Here's a diagram of sample's behavior in RxMarbles.
The gist of sample() is that there are two streams; when one is triggered, the latest value of the other stream is sent, if it hasn't already been sent.
The CombineExt library has the withLatestFrom operator, which does what you want, along with many other useful operators.
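For example (a minimal untested sketch using CombineExt): withLatestFrom(other) emits other's latest value whenever the upstream fires, which is the sample behavior minus the de-duplication in the playground below.

import Combine
import CombineExt

let data = PassthroughSubject<Int, Never>()
let trigger = PassthroughSubject<Void, Never>()

// Emits the latest value of `data` each time `trigger` fires.
let cancellable = trigger
    .withLatestFrom(data)
    .sink { value in print(value) }

data.send(1)
data.send(2)
trigger.send(())   // prints 2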
Here is a Playground that might do what you want. I didn't do a whole lot of testing on it so please proceed with caution:
import UIKit
import Combine
import PlaygroundSupport

struct SamplePublisher<DataSeq, Trigger, E> : Publisher
    where DataSeq : Publisher,
          Trigger : Publisher,
          DataSeq.Failure == Trigger.Failure,
          E == DataSeq.Failure,
          DataSeq.Output : Equatable {

    typealias Output = DataSeq.Output
    typealias Failure = E

    // The two sequences we are observing, the data sequence and the
    // trigger sequence. When the trigger fires it will send the
    // latest value from the dataSequence UNLESS it hasn't changed.
    let dataPublisher : DataSeq
    let triggerPublisher : Trigger

    struct SamplePublisherSubscription : Subscription {
        var combineIdentifier = CombineIdentifier()
        let dataSubscription : AnyCancellable
        let triggerSubscription : Subscription

        func request(_ demand: Subscribers.Demand) {
            triggerSubscription.request(demand)
        }

        func cancel() {
            dataSubscription.cancel()
            triggerSubscription.cancel()
        }
    }

    func receive<S>(subscriber: S) where S : Subscriber, E == S.Failure, DataSeq.Output == S.Input {
        var latestData : DataSeq.Output?
        var lastSent : DataSeq.Output?
        var triggerSubscription : Subscription?

        // Compares the latest value received to the last one that was sent.
        // If they don't match then it sends the latest value along.
        // If they do match, or if no value has been sent on the data stream yet,
        // don't emit a new value.
        func emitIfNeeded() -> Subscribers.Demand {
            guard let latest = latestData else { return .unlimited }
            if nil == lastSent || lastSent! != latest {
                lastSent = latest
                return subscriber.receive(latest)
            } else {
                return .unlimited
            }
        }

        // Here we watch the data stream for new values and simply
        // record them. If the data stream ends, or errors, we
        // pass that on to our subscriber.
        let dataSubscription = dataPublisher.sink(
            receiveCompletion: {
                switch $0 {
                case .finished:
                    subscriber.receive(completion: .finished)
                case .failure(let error):
                    subscriber.receive(completion: .failure(error))
                }
            },
            receiveValue: {
                latestData = $0
            })

        // The thing that subscribes to the trigger sequence.
        // When it receives a value, we emit the latest value from the data stream (if any).
        // If the trigger stream ends or errors, that will also end or error this publisher.
        let triggerSubscriber = AnySubscriber<Trigger.Output, Trigger.Failure>(
            receiveSubscription: { subscription in triggerSubscription = subscription },
            receiveValue: { _ in emitIfNeeded() },
            receiveCompletion: {
                switch $0 {
                case .finished:
                    _ = emitIfNeeded()
                    subscriber.receive(completion: .finished)
                case .failure(let error):
                    subscriber.receive(completion: .failure(error))
                }
            })

        // Subscribe to the trigger sequence.
        triggerPublisher.subscribe(triggerSubscriber)

        // Record relevant information and return the subscription to the subscriber.
        subscriber.receive(subscription: SamplePublisherSubscription(
            dataSubscription: dataSubscription,
            triggerSubscription: triggerSubscription!))
    }
}

extension Publisher {
    // A utility function that lets you create a stream that is triggered by
    // a value being emitted from another stream.
    func sample<Trigger, E>(trigger: Trigger) -> SamplePublisher<Self, Trigger, E>
        where Trigger : Publisher,
              Self.Failure == Trigger.Failure,
              E == Self.Failure,
              Self.Output : Equatable {
        return SamplePublisher(dataPublisher: self, triggerPublisher: trigger)
    }
}

var count = 0
let timer = Timer.publish(every: 5.0, on: RunLoop.current, in: .common).autoconnect().eraseToAnyPublisher()
let data = Timer.publish(every: 1.0, on: RunLoop.current, in: .common)
    .autoconnect()
    .scan(0) { total, _ in total + 1 }

var subscriptions = Set<AnyCancellable>()

data.sample(trigger: timer).print()
    .sink(receiveCompletion: {
        debugPrint($0)
    }, receiveValue: {
        debugPrint($0)
    }).store(in: &subscriptions)

PlaygroundSupport.PlaygroundPage.current.needsIndefiniteExecution = true

Handle Subscription in vertx GraphQL

I tried to use the Vert.x HttpClient/WebClient to consume a GraphQL subscription, but it did not work as expected.
The related server-side code (written with Vert.x Web GraphQL) is like the following: when a comment is added, it triggers onNext to send the comment to the Publisher.
public VertxDataFetcher<UUID> addComment() {
    return VertxDataFetcher.create((DataFetchingEnvironment dfe) -> {
        var commentInputArg = dfe.getArgument("commentInput");
        var jacksonMapper = DatabindCodec.mapper();
        var input = jacksonMapper.convertValue(commentInputArg, CommentInput.class);
        return this.posts.addComment(input)
            .onSuccess(id -> this.posts.getCommentById(id.toString())
                .onSuccess(c -> subject.onNext(c)));
    });
}

private BehaviorSubject<Comment> subject = BehaviorSubject.create();

public DataFetcher<Publisher<Comment>> commentAdded() {
    return (DataFetchingEnvironment dfe) -> {
        ConnectableObservable<Comment> connectableObservable = subject.share().publish();
        connectableObservable.connect();
        return connectableObservable.toFlowable(BackpressureStrategy.BUFFER);
    };
}
In the client I mixed the HttpClient and WebClient. Most of the time I would like to use WebClient, which is easier for handling form posts, but it does not seem to support opening a WebSocket connection. So the WebSocket part falls back to HttpClient.
var options = new HttpClientOptions()
    .setDefaultHost("localhost")
    .setDefaultPort(8080);
var httpClient = vertx.createHttpClient(options);
httpClient.webSocket("/graphql")
    .onSuccess(ws -> {
        ws.textMessageHandler(text -> log.info("web socket message handler:{}", text));

        JsonObject messageInit = new JsonObject()
            .put("type", "connection_init")
            .put("id", "1");
        JsonObject message = new JsonObject()
            .put("payload", new JsonObject()
                .put("query", "subscription onCommentAdded { commentAdded { id content } }"))
            .put("type", "start")
            .put("id", "1");

        ws.write(messageInit.toBuffer());
        ws.write(message.toBuffer());
    })
    .onFailure(e -> log.error("error: {}", e));

// this client here is WebClient.
client.post("/graphql")
    .sendJson(Map.of(
        "query", "mutation addComment($input:CommentInput!){ addComment(commentInput:$input) }",
        "variables", Map.of(
            "input", Map.of(
                "postId", id,
                "content", "comment content of post id" + LocalDateTime.now()
            )
        )
    ))
    .onSuccess(data -> log.info("data of addComment: {}", data.bodyAsString()))
    .onFailure(e -> log.error("error: {}", e));
When running the client and server, the comment is added, but the WebSocket client does not print any info about the WebSocket messages. On the server console there is a message like this:
2021-06-25 18:45:44,356 DEBUG [vert.x-eventloop-thread-1] graphql.GraphQL: Execution '182965bb-80de-416d-b5fe-fe157ab87f1c' completed with zero errors
It seems the backend commentAdded data fetcher is not invoked at all.
The complete code of the GraphQL client and server is shared on my GitHub.
After reading some of the test code of Vert.x Web GraphQL, I found I have to add a connectionInitHandler on the ApolloWSHandler, like this:
.connectionInitHandler(connectionInitEvent -> {
    JsonObject payload = connectionInitEvent.message().content().getJsonObject("payload");
    if (payload != null && payload.containsKey("rejectMessage")) {
        connectionInitEvent.fail(payload.getString("rejectMessage"));
        return;
    }
    connectionInitEvent.complete(payload);
})
When the client sends the connection_init message, connectionInitEvent.complete is required to start the communication between the client and the server.
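For context, a minimal sketch of where this handler hangs, assuming a graphql-java GraphQL instance and a Vert.x Web Router (both names are illustrative, not from the question):

// Sketch: mount ApolloWSHandler, with the connectionInitHandler above, on the
// route that serves the WebSocket side of /graphql.
ApolloWSHandler apolloWSHandler = ApolloWSHandler.create(graphQL)
    .connectionInitHandler(connectionInitEvent ->
        connectionInitEvent.complete(connectionInitEvent.message().content().getJsonObject("payload")));
router.route("/graphql").handler(apolloWSHandler);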

How can I make a batch send transaction?

I'm having trouble sending tokens to many addresses simultaneously. As far as I know, Cosmos does not support batch sending, so I have to make one tx for each recipient, and make sure the account sequence (nonce) is correct for each tx.
So if I specify the account sequence manually, I can create many send transactions at once - but I still have to wait for the node to confirm one tx and update the account sequence before sending the next one.
What can one do in this case?
Cosmos actually does support a MultiSend operation on the bank module. Unfortunately it is typically not wired into the clients, but it is widely used by exchanges such as Coinbase to optimize transfers.
https://github.com/cosmos/cosmos-sdk/blob/e957fad1a7ffd73712cd681116c9b6e09fa3e60b/x/bank/keeper/msg_server.go#L73
I have since been reminded that a single Cosmos transaction can consist of multiple Messages, where each Message is a message to the bank module to send tokens to one account. That's how batch sending works in Cosmos.
Unfortunately this is not available via the command line, so one has to use the SDK to make such a transaction.
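As a rough sketch of what that looks like with the newer @cosmjs/stargate client (an assumption on my part; the demonstration below targets the older v0.39.x Launchpad API): one transaction carrying one MsgSend per recipient, signed once, so only a single account sequence is consumed.

import { SigningStargateClient, coins } from "@cosmjs/stargate";

// One tx, many MsgSend messages: each recipient gets its own message, but the
// whole batch consumes a single account sequence. Amounts/fees are illustrative.
async function batchSend(client: SigningStargateClient, sender: string, recipients: string[]) {
  const messages = recipients.map((to) => ({
    typeUrl: "/cosmos.bank.v1beta1.MsgSend",
    value: { fromAddress: sender, toAddress: to, amount: coins(1000, "ucosm") },
  }));
  const fee = { amount: coins(5000, "ucosm"), gas: String(80_000 * messages.length) };
  return client.signAndBroadcast(sender, messages, fee, "batch send");
}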
Here's a CosmJS demonstration of sending a transaction with two messages and two signatures on the v0.39.x LTS release of the Cosmos SDK (hat tip to Ethan Frey from Confio for pointing this out).
In case the link changes at any point, here's the code:
describe("appendSignature", () => {
it("works", async () => {
pendingWithoutLaunchpad();
const wallet0 = await Secp256k1HdWallet.fromMnemonic(faucet.mnemonic, makeCosmoshubPath(0));
const wallet1 = await Secp256k1HdWallet.fromMnemonic(faucet.mnemonic, makeCosmoshubPath(1));
const client0 = new SigningCosmosClient(launchpad.endpoint, faucet.address0, wallet0);
const client1 = new SigningCosmosClient(launchpad.endpoint, faucet.address1, wallet1);
const msg1: MsgSend = {
type: "cosmos-sdk/MsgSend",
value: {
from_address: faucet.address0,
to_address: makeRandomAddress(),
amount: coins(1234567, "ucosm"),
},
};
const msg2: MsgSend = {
type: "cosmos-sdk/MsgSend",
value: {
from_address: faucet.address1,
to_address: makeRandomAddress(),
amount: coins(1234567, "ucosm"),
},
};
const fee = {
amount: coins(2000, "ucosm"),
gas: "160000", // 2*80k
};
const memo = "This must be authorized by the two of us";
const signed = await client0.sign([msg1, msg2], fee, memo);
const cosigned = await client1.appendSignature(signed);
expect(cosigned.msg).toEqual([msg1, msg2]);
expect(cosigned.fee).toEqual(fee);
expect(cosigned.memo).toEqual(memo);
expect(cosigned.signatures).toEqual([
{
pub_key: faucet.pubkey0,
signature: jasmine.stringMatching(base64Matcher),
},
{
pub_key: faucet.pubkey1,
signature: jasmine.stringMatching(base64Matcher),
},
]);
// Ensure signed transaction is valid
const broadcastResult = await client0.broadcastTx(cosigned);
assertIsBroadcastTxSuccess(broadcastResult);
});
});
For doing this kind of thing in Go, as in the Cosmos SDK client CLI, you can take the distribution module as an example: https://github.com/cosmos/cosmos-sdk/blob/master/x/distribution/client/cli/tx.go#L165-L180
msgs := make([]sdk.Msg, 0, len(validators))
for _, valAddr := range validators {
    val, err := sdk.ValAddressFromBech32(valAddr)
    if err != nil {
        return err
    }
    msg := types.NewMsgWithdrawDelegatorReward(delAddr, val)
    if err := msg.ValidateBasic(); err != nil {
        return err
    }
    msgs = append(msgs, msg)
}

chunkSize, _ := cmd.Flags().GetInt(FlagMaxMessagesPerTx)
return newSplitAndApply(tx.GenerateOrBroadcastTxCLI, clientCtx, cmd.Flags(), msgs, chunkSize)

How to create new chat room with play framework websocket?

I tried the chat example with WebSockets in Play Framework 2.6.x and it works fine. Now, for the real application, I need to create multiple chat rooms based on user requests, and users will be able to access different chat rooms with an id or something. I think it might involve creating a new flow for each room. The related code is here:
private val (chatSink, chatSource) = {
  val source = MergeHub.source[WSMessage]
    .log("source")
    .map { msg =>
      try {
        val json = Json.parse(msg)
        inputSanitizer.sanText((json \ "msg").as[String])
      } catch {
        case e: Exception =>
          println(">>" + msg)
          "Malfunction client"
      }
    }
    .recoverWithRetries(-1, { case _: Exception => Source.empty })
  val sink = BroadcastHub.sink[WSMessage]
  source.toMat(sink)(Keep.both).run()
}

private val userFlow: Flow[WSMessage, WSMessage, _] = {
  Flow.fromSinkAndSource(chatSink, chatSource)
}
private val userFlow: Flow[WSMessage, WSMessage, _] = {
Flow.fromSinkAndSource(chatSink, chatSource)
}
But I really don't know how to create a new flow with an id and access it later. Can anyone help me with this?
I finally figured it out. Posting the solution here in case anyone has similar problems.
My solution is to use the AsyncCacheApi to store flows in the cache by key, generating a new flow when necessary instead of creating just one sink and source:
val chatRoom = cache.get[Flow[WSMessage, WSMessage, _]](s"id=$id")
chatRoom.map { room =>
  val flow = if (room.nonEmpty) room.get else createNewFlow
  cache.set(s"id=$id", flow)
  Right(flow)
}
def createNewFlow: Flow[WSMessage, WSMessage, _] = {
  val (chatSink, chatSource) = {
    val source = MergeHub.source[WSMessage]
      .map { msg =>
        try {
          inputSanitizer.sanitize(msg)
        } catch {
          case e: Exception =>
            println(">>" + msg)
            "Malfunction client"
        }
      }
      .recoverWithRetries(-1, { case _: Exception => Source.empty })
    val sink = BroadcastHub.sink[WSMessage]
    source.toMat(sink)(Keep.both).run()
  }
  Flow.fromSinkAndSource(chatSink, chatSource)
}
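A small variation worth noting (a sketch, not part of the original answer): AsyncCacheApi also has getOrElseUpdate, which closes the gap between cache.get and cache.set above when two users open the same room concurrently:

// Looks up the room's flow, creating and caching it atomically if absent.
def roomFlow(id: String): Future[Flow[WSMessage, WSMessage, _]] =
  cache.getOrElseUpdate[Flow[WSMessage, WSMessage, _]](s"id=$id") {
    Future.successful(createNewFlow)
  }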
