Multiple Send filters with KafkaFactoryConfigurator - masstransit

Is it possible to configure the Kafka Rider to use more than one SendFilter? Looking at KafkaFactoryConfigurator, it seems to accept only one delegate for configuring a SendFilter.
What I want to do: first, I communicate with a Spring Boot application over Kafka topics, and I need to send the same header names it expects, e.g. for MessageId. Second, I also want to send some business-context information in additional headers. I don't want to mix these two concerns in one filter, but I don't know how to set up two filters for the Kafka Rider.
I also tried to set up the filters on the in-memory bus, but it looks like those filters are not applied to the rider's Send.
Is there a way to do this?
I'm using MassTransit v8.
Thank you.
EDIT:
My setup looks like this sample:
builder.Services.AddMassTransit(x =>
{
    x.UsingInMemory((context, config) =>
    {
        //config.UseSendFilter(typeof(KBHeaderFilter<>), context);
    });
    x.AddRider(rider =>
    {
        rider.AddProducer<MessageV1>("to-poc-masstransit", (riderContext, producerConfig) =>
        {
            var schemaRegistryClient = riderContext.GetRequiredService<ISchemaRegistryClient>();
            var serializerConfig = new AvroSerializerConfig { ... };
            producerConfig.SetValueSerializer(new AvroSerializer<MessageV1>(schemaRegistryClient, serializerConfig).AsSyncOverAsync());
        });
        rider.UsingKafka((context, k) =>
        {
            //k.UseSendFilter(typeof(TestScopedFilter<>), context);
            k.SetHeadersSerializer(new TestSerializer());
            k.Host("localhost:port");
        });
    });
});
I want to find a single place where I can override the sent headers. But it looks like the appropriate place is where serialization/deserialization happens, and that place is not the same for standard brokers and riders, so I would have to implement my own header serializer for the Kafka rider and another one for the Artemis broker.
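For context, the kind of filter I have in mind (the KBHeaderFilter in the commented-out line above) looks roughly like this simplified sketch; the second, business-context filter would set its own headers the same way, and the header name here is only an example:

using System.Threading.Tasks;
using MassTransit;

public class KBHeaderFilter<T> : IFilter<SendContext<T>> where T : class
{
    public Task Send(SendContext<T> context, IPipe<SendContext<T>> next)
    {
        // Mirror MassTransit's MessageId under the header name the Spring Boot side expects.
        context.Headers.Set("messageId", context.MessageId?.ToString());
        return next.Send(context);
    }

    public void Probe(ProbeContext context) => context.CreateFilterScope("kbHeaderFilter");
}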

Related

Kafka Streams: Using the DSL API, within a transform, how can I send two messages to different topics / separate DSL downstream processors?

I'm using the DSL API and I have a use case where I need to check a condition and, if it's true, send an additional message to a topic separate from the happy path. My question is: how can I attach child processors to parents in the DSL API? Is it as simple as caching a stream variable, using it in two subsequent places, and naming those stream processors? Here's some brief code that explains what I'm trying to do. I'm using the DSL API because I need the foreignKeyJoin.
var myStream = stream.process(myProcessorSupplier); // 3.3 returns a stream
myStream.to("happyThingTopic"); // Q: will the forward ever land here?
myStream.map(myKvMapper, Named.as("what-is-this")).to("myOtherTopic"); // will the forward land here?

public KeyValue<String, Object> process(Object key, Object value) {
    if (value.hasFlag) {
        processorContext.forward(key, new OtherThing(), "what-is-this?");
    }
    return new KeyValue<>(key, new HappyThing(value));
}

With Elsa Workflow 2.0 how to create an activity based on HttpEndpoint

We're trying to use Elsa for a project, but we're facing some difficulties, so we badly need suggestions. One thing we're trying to do is create an Activity based on the existing HttpEndpoint. However, even with the source code from https://github.com/elsa-workflows/elsa-core, and after googling some docs and samples, we haven't been able to figure it out.
Here is exactly what we're attempting to do:
create a new Activity based on HttpEndpoint
make the Path include WorkflowInstanceId by default
a few more customizations needed in our scenario
Looking forward to suggestions and guidance. Thanks!
You could do something like what the Webhooks module does: instead of inheriting from HttpEndpoint, it uses an IActivityTypeProvider implementation that dynamically yields new activity types that reuse HttpEndpoint.
In your case, your activity type provider would only have to yield a single activity type (e.g. MyEndpoint) that pre-configures any and all aspects that you want, including the default Path property value.
Deriving a new activity type from HttpEndpoint directly works too. You will have to implement your own bookmark provider that provides HttpEndpointBookmark objects, since the HTTP middleware for the HttpEndpoint activity relies on those.
Example:
public class MyEndpointBookmarkProvider : BookmarkProvider<HttpEndpointBookmark>
{
    public override bool SupportsActivity(BookmarkProviderContext context) =>
        context.ActivityType.TypeName == nameof(MyEndpoint);

    public override async ValueTask<IEnumerable<BookmarkResult>> GetBookmarksAsync(BookmarkProviderContext context, CancellationToken cancellationToken)
    {
        // Read the Path and Methods properties configured on the MyEndpoint activity.
        var path = await context.ReadActivityPropertyAsync<MyEndpoint, PathString>(x => x.Path, cancellationToken);
        var methods = (await context.ReadActivityPropertyAsync<MyEndpoint, HashSet<string>>(x => x.Methods, cancellationToken))?.Select(ToLower) ?? Enumerable.Empty<string>();

        // One bookmark per HTTP method, registered under the HttpEndpoint name so the HTTP middleware can match requests.
        BookmarkResult CreateBookmark(string method) => Result(new(path, method), nameof(HttpEndpoint));

        return methods.Select(CreateBookmark);
    }

    private static string ToLower(string s) => s.ToLowerInvariant();
}
The above bookmark provider provides bookmarks for your custom activity when the activity being indexed is of type "MyEndpoint".
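The custom activity itself can be little more than a subclass of HttpEndpoint. A rough sketch only; the [Activity] attribute values and any pre-configured defaults are assumptions you would adapt to your scenario:

// Rough sketch of the custom activity the bookmark provider above refers to.
// Deriving from HttpEndpoint keeps its inputs (Path, Methods, ...); defaults and
// extra properties for your scenario would be added here.
[Activity(Category = "HTTP", DisplayName = "My Endpoint")]
public class MyEndpoint : HttpEndpoint
{
}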
Alternatively, you might go a different route altogether and simply implement an API endpoint (an ASP.NET Core controller, middleware or route endpoint) that triggers workflows based on your custom activity. I've written some documentation about that process here: https://elsa-workflows.github.io/elsa-core/docs/next/guides/guides-blocking-activities

How to log transcripts of both User and bot

I've enabled transcript logging by
Use(new TranscriptLoggerMiddleware(new AzureBlobTranscriptStore(settings.BlobStorage.ConnectionString, settings.BlobStorage.Container)));
It only stores User messages though. How do I make it log bot answers too?
Is there a way to convert a bunch of JSON files into a readable, line-by-line transcript like the one a user sees in the webchat?
I don't see why this would make a difference, but this is how I'm set up. Fundamentally, I'd say it's not really different from your setup. Are there any other settings, configurations, or middleware you have enabled that might be interfering?
const transcriptStore = new AzureBlobTranscriptStore({
    storageAccountOrConnectionString: process.env.blobStorageConnectionString,
    containerName: process.env.blobStorageContainer
});

const transcriptMiddleware = new TranscriptLoggerMiddleware(transcriptStore);

const adapter = new BotFrameworkAdapter(adapterSettings)
    .use(transcriptMiddleware);
Hope this helps!
I learned from this answer to create my own middleware that stores the incoming and outgoing activities in the database. I store them in MS SQL, then use sendConversationHistory to send the stored activities to the webchat.
https://stackoverflow.com/a/54228225/10531724
If you need more clarification please let me know.
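For reference, a rough C# sketch of what such middleware might look like; IActivityStore here is a hypothetical persistence abstraction standing in for the SQL layer:

using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Schema;

// Hypothetical persistence abstraction (not part of the Bot Framework SDK).
public interface IActivityStore
{
    Task SaveAsync(IActivity activity, CancellationToken cancellationToken);
}

public class ConversationLoggerMiddleware : IMiddleware
{
    private readonly IActivityStore _store;

    public ConversationLoggerMiddleware(IActivityStore store) => _store = store;

    public async Task OnTurnAsync(ITurnContext turnContext, NextDelegate next, CancellationToken cancellationToken = default)
    {
        // Log the incoming (user) activity.
        await _store.SaveAsync(turnContext.Activity, cancellationToken);

        // Hook outgoing activities so the bot's replies are logged as well.
        turnContext.OnSendActivities(async (ctx, activities, nextSend) =>
        {
            foreach (var activity in activities)
                await _store.SaveAsync(activity, cancellationToken);

            return await nextSend();
        });

        await next(cancellationToken);
    }
}

It can be registered on the adapter the same way as the transcript middleware, e.g. adapter.Use(new ConversationLoggerMiddleware(store)).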

Connecting to multiple cores at runtime

Is there a way to define a connection to a new Solr core on the fly, based on dynamic data?
We have a scenario where our Solr installation has multiple Cores/Indexes for the same type of document, separated by date (so a given week's documents will be on Index 1, the previous week's on Index 2, etc).
So when I receive my query, I check to see the required date range, and based on it, I want to query a specific core. I don't know in advance, at startup, which cores I will have, since new ones can be created dynamically during runtime.
Using the built-in ServiceLocation provider, there's no way to link two different Cores to the same document class. But even if I use a different DI container (currently Autofac in my case), I still need to specify all Core URLs in advance, during component registration.
Is there a way around this, other than always creating a new Autofac container, resolving the ISolrOperations<> instance from it, and releasing it until the next time I need to connect to a core?
A comment from Mauricio Scheffer (the developer of SolrNet) confirmed that there's no built-in support for connecting to different index URLs on the fly. So instead of instantiating the internal objects myself, I used a hack on top of my existing Autofac-based DI container:
public ISolrOperations<TDocument> ConnectToIndex<TDocument>(string indexUrl)
{
    // Create a new AutoFac container environment.
    ContainerBuilder builder = new ContainerBuilder();

    // Autofac-for-Solr.Net config element.
    var cores = new SolrServers
    {
        new SolrServerElement
        {
            Id = indexUrl,
            DocumentType = typeof(TDocument).AssemblyQualifiedName,
            Url = indexUrl,
        }
    };

    // Create the Autofac container.
    builder.RegisterModule(new SolrNetModule(cores));
    var container = builder.Build();

    // Resolve the SolrNet object for the URL.
    return container.Resolve<ISolrOperations<TDocument>>();
}
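A hypothetical usage example (MyDocument, the core URL, and the query are placeholders). Note that each call builds a brand-new container, so in practice you would probably cache the container or the resolved ISolrOperations<TDocument> per core URL:

// Pick the core for the requested date range, then query it.
var solr = ConnectToIndex<MyDocument>("http://localhost:8983/solr/documents-week-15");
var results = solr.Query(new SolrQuery("type:invoice"));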

What's the difference between relying on Application's event mixin and Application.vent?

What's the point of Application.vent in Marionette? The Application object already extends Backbone.Events, so I can write the following:
window.app = new Backbone.Marionette.Application();
app.on("my:event", function() { console.log(arguments); });
app.trigger("my:event");
More easily than:
window.app = new Backbone.Marionette.Application();
app.vent.on("my:event", function() { console.log(arguments); });
app.vent.trigger("my:event");
I've read the source and I can't tell the difference, but that doesn't mean there isn't one, and I'm half-willing to bet there's a good reason it's done the way it is.
While Application.vent's functionality does overlap with Application's built-in events, it adds more than a simple on/trigger event mechanism because it's an instance of Backbone.Wreqr. This adds command events and a request/response mechanism that lets modules communicate with each other more easily.
It's still just events at the heart of it, but it aims to make inter-module communication a little easier to follow.
