A way to work in Spring Integration (Java DSL) with DestinationResolvers

Can I configure a single JmsMessageDrivenChannelAdapter so it is able to work with different destinations, via DestinationResolvers or similar? I'd like to provide the destination logic via the IntegrationFlows builder, so I can reuse the component (I don't want to create one adapter per topic), or centralize all destination sources/decision rules in a single class.

You can do it like this:
IntegrationFlows
        .from(Jms.messageDrivenChannelAdapter(jmsConnectionFactory())
                .destination("DUMMY")
                .configureListenerContainer(c ->
                        c.destinationResolver((session, destinationName, pubSubDomain) ->
                                /* your logic for dynamic destination resolution */)))
You need that "DUMMY" destination configuration to satisfy the container's configuration validation:
protected void validateConfiguration() {
    if (this.destination == null) {
        throw new IllegalArgumentException("Property 'destination' or 'destinationName' is required");
    }
}
That said, I'm not sure it is going to work properly anyway.
The container starts a JMS Consumer based on the destination (even if you provide it via the custom DestinationResolver), and that can't be changed until the container stops.
You can consider using Jms.inboundAdapter() though, which is pollable and based on JmsTemplate.receiveSelected(). That way you can change the destination on each receive() invocation from the poller.
You will need a dummy destinationName configuration there anyway; otherwise the call doesn't go through getDestinationResolver(). A sketch of that idea follows.
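For illustration, a minimal sketch of that pollable alternative (the flow bean, poller interval, and queue-picking logic are assumptions, not part of the original answer):
@Bean
public IntegrationFlow dynamicDestinationFlow(ConnectionFactory connectionFactory) {
    JmsTemplate template = new JmsTemplate(connectionFactory);
    // consulted on every receive() because only a dummy destination name is configured
    template.setDestinationResolver((session, name, pubSubDomain) -> pickQueue(session));
    return IntegrationFlows
            .from(Jms.inboundAdapter(template).destination("DUMMY"),
                    e -> e.poller(p -> p.fixedDelay(1000)))
            .handle(m -> System.out.println(m.getPayload()))
            .get();
}

private Destination pickQueue(Session session) throws JMSException {
    // hypothetical decision rule: alternate between two queues
    return session.createQueue(System.currentTimeMillis() % 2 == 0 ? "queue.A" : "queue.B");
}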

Related

Main processing flow programmatic approach when using Spring Integration with Project Reactor

I want to define a flow that consumes Kafka with Reactor Kafka and writes to MongoDB, and only on success writes the IDs to Kafka. I'm using Project Reactor with the Spring Integration Java DSL, and I'd like to have a FlowBuilder class that defines my pipeline at a high level. I currently have the following direction:
public IntegrationFlow buildFlow() {
    return IntegrationFlows.from(reactiveKafkaConsumerTemplate)
            .publishSubscribeChannel(c -> c
                    .subscribe(sf -> sf
                            .handle(MongoDb.reactiveOutboundChannelAdapter())))
            .handle(writeToKafka)
            .get();
}
I've seen in the docs that there is support for a different approach that also works with Project Reactor. This approach doesn't include the use of IntegrationFlows. It looks like this:
@MessagingGateway
public static interface TestGateway {

    @Gateway(requestChannel = "promiseChannel")
    Mono<Integer> multiply(Integer value);

}
...
@ServiceActivator(inputChannel = "promiseChannel")
public Integer multiply(Integer value) {
    return value * 2;
}
...
Flux.just("1", "2", "3", "4", "5")
        .map(Integer::parseInt)
        .flatMap(this.testGateway::multiply)
        .collectList()
        .subscribe(integers -> ...);
I'd like to know which of the two is the more recommended way of processing when working with these two libraries. I also wonder how I can use the Reactive MongoDB adapter in the second example. I'm not sure the second approach is even possible without an IntegrationFlows wrapper.
The @MessagingGateway was designed as a high-level end-user API, to hide messaging underneath as much as possible, so the target service stays free from any messaging abstraction while you develop its logic.
It is possible to use such an interface adapter from an IntegrationFlow, and you should treat it as a regular service activator, so it would look like this:
.handle("testGateway", "multiply", e -> e.async(true))
The async(true) makes this service activator subscribe to the returned Mono. If you omit it, you are on your own to subscribe to it downstream, since exactly that Mono becomes the payload of the next message in the flow.
If you want the opposite, calling an IntegrationFlow from the Flux, like that flatMap(), then consider using the toReactivePublisher() operator on the flow definition to return a Publisher<?> and declare it as a bean. In this case it is better not to use MongoDb.reactiveOutboundChannelAdapter(), but just a ReactiveMongoDbStoringMessageHandler, to let its returned Mono be propagated to that Publisher.
On the other hand, if you want to keep that @MessagingGateway with the Mono return but still call a ReactiveMongoDbStoringMessageHandler from it, then declare that handler as a bean and mark it with @ServiceActivator.
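For illustration, a minimal sketch of that bean wiring, assuming a ReactiveMongoOperations bean, a hypothetical "storeChannel" as the gateway's request channel, and a made-up collection name; the framework takes care of adapting the ReactiveMessageHandler attached via @ServiceActivator:
@Bean
@ServiceActivator(inputChannel = "storeChannel")
public ReactiveMongoDbStoringMessageHandler mongoStoreHandler(ReactiveMongoOperations mongoOperations) {
    ReactiveMongoDbStoringMessageHandler handler = new ReactiveMongoDbStoringMessageHandler(mongoOperations);
    // hypothetical target collection; any SpEL expression works here
    handler.setCollectionNameExpression(new LiteralExpression("orders"));
    return handler;
}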
We also have an ExpressionEvaluatingRequestHandlerAdvice to catch errors (or success) on the particular endpoint and handle them respectively: https://docs.spring.io/spring-integration/docs/current/reference/html/messaging-endpoints.html#expression-advice
I think what you are looking for is like this:
public IntegrationFlow buildFlow() {
    return IntegrationFlows.from(reactiveKafkaConsumerTemplate)
            .handle(reactiveMongoDbStoringMessageHandler, "handleMessage")
            .handle(writeToKafka)
            .get();
}
Pay attention to the .handle(reactiveMongoDbStoringMessageHandler, "handleMessage") - it is not the same as a MongoDb.reactiveOutboundChannelAdapter(), because that one wraps the ReactiveMessageHandler into a ReactiveMessageHandlerAdapter for automatic subscription. What you need looks more like having that Mono<Void> returned to your own control, so you can use it as the input to your writeToKafka service, subscribe there yourself, and handle success or error as you explained. The point is that with Reactive Streams we cannot provide imperative error handling. The approach is the same as with any async API usage, so we send errors to the errorChannel for Reactive Streams, too.
We can probably improve MongoDb.reactiveOutboundChannelAdapter() with something like a returnMono(true/false) option to make a use-case like yours available out-of-the-box.

Spring Boot - Camel - Tracking an exchange all the way through

We are trying to set up a very simple auditing database table for a very complex Spring Boot/Camel application with many routes (mostly internal routes using seda://). The idea is that we record each route's processing outcome in the database table. Then, when issues arise, we can log in to the database, query the table, and pinpoint exactly where the issue happened. I thought I could just use the exchange id as the unique tracking identifier, but quickly learned that all the seda:// routes make new exchanges, or at least that's what I'm seeing (Camel version 2.24.3). Frankly, I don't care what we use for the unique identifier; I can generate a UUID easily enough and then use exchange.setProperty("id-unique", UUID).
I did manage to get something to work using exchange.setProperty("id-exchange", exchange.getExchangeId()) and have it persist the unique identifier through the routes (I did read that certain endpoint prefixes such as jms:// will not persist exchange properties, though). The idea is that the very first Processor places the exchange id (unique id) on the exchange properties; my tracking logic is in a processor that I can include as part of the route's definition:
@Override
public void configure() throws Exception {

    // EVENTS : Collect statistics from Camel events
    this.getContext().getManagementStrategy().addEventNotifier(this.camelEventNotifier);

    // INITIAL : ${body} exchange coming from a simple URL endpoint
    //           POST request with an XML Message...simulates an MQ
    //           message from Central MQ. The Web/UI service places the
    //           message onto the camel route using producerTemplate.
    from("direct:" + Globals.ROUTEID_LBR_INTAKE_MQ)
        .routeId(Globals.ROUTEID_LBR_INTAKE_MQ)
        .description("Loss Backup Reports MQ XML inbound messages")
        .autoStartup(false)
        .process(processor)
        .process(getTrackingProcessor())
        .to("seda:" + Globals.ROUTEID_LBR_VALIDATION)
        .end();
}
This proof-of-concept (POC) allowed me to at least get things tracking like we want. Note the multiple rows with the same unique identifier:
ID_ROW | ID_EXCHANGE                          | PROCESS_GROUP       | PROCESS_STEP   | RESULTS_STEP | RESULTS_MESSAGE
1      | ID-LIBP45P-322256M-1603188596161-4-6 | Loss Backup Reports | lbr-intake-mq  | add          | lbr-intake-mq
2      | ID-LIBP45P-322256M-1603188596161-4-6 | Loss Backup Reports | lbr-validation | add          | lbr-intake-mq
The thing is, this POC is proving rigid, and it is difficult to record outcomes such as SUCCESS versus EXCEPTION.
My question is: has anyone done anything like this? And if so, how was it implemented? Or is there a fancy way in Camel to handle this that I just couldn't find on the web?
My other ideas were:
Create an old-fashioned abstract TrackerProcessor class that all my tracked Processors extend, with a handful of methods in there to create, update, etc. Each processor then just calls the inherited methods to create and manage the audit entries. The advantage here is that the exchange, with all the data involved to store in the database table, is readily available.
@Component
public abstract class ProcessorAbstractTracker implements Processor {

    @Override
    abstract public void process(Exchange exchange) throws Exception;

    public void createTracker(Exchange exchange) {
    }

    public void updateTracker(Exchange exchange, String theResultsMessage, String theResultsStep) {
    }

}
Create an @Autowired bean that every tracked Camel Processor wires in, and put the tracking logic in that bean. This seems simple and clean. My only concern/question here is how to scope the bean (maybe prototype): since many routes would be using the bean concurrently, is there any chance we get mixed processing values? (A sketch of this idea follows the snippet below.)
@Autowired
ProcessorTracker tracker;
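For illustration, a minimal sketch of that tracker bean (the JdbcTemplate wiring and TRACKER table columns are assumptions, loosely based on the POC table above). Because the bean keeps no per-exchange state and everything arrives via parameters, the default singleton scope is safe under concurrent routes and values cannot get mixed:
@Component
public class ProcessorTracker {

    @Autowired
    private JdbcTemplate jdbcTemplate; // any persistence mechanism works here

    public void createTracker(Exchange exchange, String processGroup, String processStep) {
        // stateless: everything comes from the parameters, nothing is stored on the bean
        jdbcTemplate.update(
                "INSERT INTO TRACKER (ID_EXCHANGE, PROCESS_GROUP, PROCESS_STEP) VALUES (?, ?, ?)",
                exchange.getProperty("id-exchange", String.class), processGroup, processStep);
    }

    public void updateTracker(Exchange exchange, String resultsStep, String resultsMessage) {
        jdbcTemplate.update(
                "UPDATE TRACKER SET RESULTS_STEP = ?, RESULTS_MESSAGE = ? WHERE ID_EXCHANGE = ?",
                resultsStep, resultsMessage,
                exchange.getProperty("id-exchange", String.class));
    }

}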
Other ideas?
tia, adym

DDD: Where to raise "created" domain event

I am struggling to find and implement the best practice for the following problem: where is the best place to raise a "created" domain event (the event that notifies about the creation of an aggregate)? For example, if we have an Order aggregate in our bounded context, we would like to notify all interested parties when an order is created. The event could be OrderCreatedEvent.
What I tried first was to raise this event in the constructor (I have a collection of domain events in each aggregate). That works only when we create the order, because whenever we want to do anything with this aggregate in the future, we create a new instance of it through the constructor again. Then OrderCreatedEvent would be raised again, even though no order was created.
I then thought it would be okay to raise the event in the application layer, but that is an anti-pattern (domain events should live only in the domain). Maybe having a Create method that just adds the OrderCreatedEvent to the aggregate's domain events list, and calling it from the application layer when an order is created, is an option.
An interesting fact I found on the internet is that raising domain events in the constructor is an anti-pattern, which suggests the last described option (a Create method) would be the best approach.
I am using Spring Boot for the application and MapStruct for the mapper that maps the database/repository entity to the domain model aggregate. I also tried to find a way to create a mapper that skips the constructor of the target class, but since all properties of the Order aggregate are private, that seems impossible.
Usually constructors are used only to assign the object's fields. They are not the right place to trigger behaviour, especially when it throws exceptions or has side effects.
DDD theorists (from Eric Evans onwards) suggest implementing factories for aggregate creation. A factory method, for example, can invoke the aggregate constructor (and wire up the aggregate with child domain objects as well) and also register an event.
Publishing events from the application layer is not an anti-pattern per se. Application services can depend on the domain event publisher; the important thing is that it is not the application layer that decides which event to send.
To summarize, with a stack like Java/Spring Boot and its domain events support, your code could look like:
public class MyAggregate extends AbstractAggregateRoot<MyAggregate> {

    public static MyAggregate create() {
        MyAggregate created = new MyAggregate();
        created.registerEvent(new MyAggregateCreated());
        return created;
    }

}
public class MyApplicationService {

    @Autowired
    private MyAggregateRepository repository;

    public void createAnAggregate() {
        repository.save(MyAggregate.create());
    }

}
Notice that event publishing happens automagically after calling repository.save(). The downside here is that, when you use db-generated identifiers, the aggregate id is not available in the event payload, since it is assigned only after the aggregate is persisted. In that case I would change the application service code like this:
public class MyApplicationService {

    @Autowired
    private MyAggregateRepository repository;

    @Autowired
    private ApplicationEventPublisher publisher;

    public void createAnAggregate() {
        // assumes the aggregate exposes its recorded events, e.g. via a public
        // domainEvents() accessor (it is protected on AbstractAggregateRoot)
        repository.save(MyAggregate.create()).domainEvents().forEach(evt -> {
            publisher.publishEvent(evt);
        });
    }

}
The application layer is in charge of deciding what to do to fulfil this workflow (create an aggregate, persist it, and send some events), but all the steps happen transparently. I can add a new property to the aggregate root, change the DBMS, or change the event contract; this won't change these lines of code. The application layer decides what to do and the domain layer decides how to do it. Factories are part of the domain layer, events are a transient part of the aggregate state, and the publishing part is transparent from the domain standpoint.
Check out this question: Is it safe to publish Domain Event before persisting the Aggregate?
"I then thought it would be okay to raise the event in the application layer, but that is an anti-pattern (domain events should live only in the domain)." - Domain events live in the domain layer, but the application layer references the domain layer and can easily emit domain events.

Spring Integration DSL adding mid-flow transaction

I want to make a specific part of a flow transactional. For instance, I want to make the first two transform operations one transactional block. Here is the flow code that I use:
@Bean
public IntegrationFlow createNumberRange() {
    return IntegrationFlows.from("npEventPubSubChannel")
            .transform(...)
            .transform(...) // should be transactional together with the transform above
            .transform(...) // non-transactional
            .handle((payload, headers) -> numbRepository.saveAll(payload))
            .get();
}
I found a workaround: adding another handle() and directing the flow to a transactional gateway, like this one:
.handle("transactionalBean", "transactionalMethod") // then implemented a messaging gateway which consists of the transactional method
I also found the mid-flow transactional support, but couldn't find an example to work from.
Is there a more elegant solution than directing to another gateway in the middle of the flow?
If you want to wrap two transformers into a transaction, you have no choice but to hide that call behind a transactional gateway. That is fully similar to what you would do in raw Java:
@Transactional
void myTransactionalMethod() {
    transform1();
    transform2();
}
I'm sure you agree that we always have to do it this way to have both of them in the same transaction.
With the Spring Integration Java DSL you can do this, though:
.gateway(f -> f
                .transform(...)
                .transform(...),
        e -> e.transactional())
Do you agree that this is similar to what we have in raw Java, and not so bad from an elegance perspective?
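For illustration, a minimal sketch of the answer applied to the original flow, with hypothetical step1/step2/step3 methods standing in for the elided transformers:
@Bean
public IntegrationFlow createNumberRange() {
    return IntegrationFlows.from("npEventPubSubChannel")
            .gateway(f -> f
                            .transform(this::step1)
                            .transform(this::step2),
                    e -> e.transactional()) // first two transforms share one transaction
            .transform(this::step3) // stays non-transactional
            .handle((payload, headers) -> numbRepository.saveAll(payload))
            .get();
}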

Membership reboot replace Ninject with Simple Injector

I need to add MembershipReboot (RavenDb) to a project that uses the Simple Injector IoC container.
Ninject implementation
var config = MembershipRebootConfig.Create();
kernel.Bind<MembershipRebootConfiguration<HierarchicalUserAccount>>().ToConstant(config);
kernel.Bind<UserAccountService<HierarchicalUserAccount>>().ToSelf();
kernel.Bind<AuthenticationService<HierarchicalUserAccount>>().To<SamAuthenticationService<HierarchicalUserAccount>>();
kernel.Bind<IUserAccountRepository<HierarchicalUserAccount>>().ToMethod(ctx => new BrockAllen.MembershipReboot.RavenDb.RavenUserAccountRepository("RavenDb"));
kernel.Bind<IUserAccountQuery>().ToMethod(ctx => new BrockAllen.MembershipReboot.RavenDb.RavenUserAccountRepository("RavenDb"));
Simple Injector implementation
container.Register(MembershipRebootConfig.Create);
container.Register<UserAccountService<HierarchicalUserAccount>>();
container.Register<AuthenticationService<HierarchicalUserAccount>, SamAuthenticationService<HierarchicalUserAccount>>();
container.Register<IUserAccountRepository<HierarchicalUserAccount>>(() => new RavenUserAccountRepository("RavenDb"), Lifestyle.Singleton);
container.Register<IUserAccountQuery>(() => new RavenUserAccountRepository("RavenDb"));
On the line
container.Register<UserAccountService<HierarchicalUserAccount>>();
I get this error:
For the container to be able to create UserAccountService, it should contain exactly one public constructor, but it has 2.
Parameter name: TConcrete
Thanks for your help.
Simple Injector forces you to let your components have one single public constructor, because having multiple injection constructors is an anti-pattern.
In case the UserAccountService is part of your code base, you should remove the constructor that should not be used for auto-wiring.
In case the UserAccountService is part of a reusable library, you should prevent using your container's auto-wiring capabilities, as described here. In that case you should fall back to wiring the type yourself and let your code call into the proper constructor, for instance:
container.Register<UserAccountService<HierarchicalUserAccount>>(() =>
    new UserAccountService<HierarchicalUserAccount>(
        container.GetInstance<MembershipRebootConfiguration<HierarchicalUserAccount>>(),
        container.GetInstance<IUserAccountRepository<HierarchicalUserAccount>>()));
I'm just going to include here how I converted the Ninject configuration to Simple Injector for the Single Tenant sample in the MembershipReboot repository (which I cloned). I thought that might be beneficial for anyone searching for how to go about this, as it may save them some time.
Firstly, the configuration in the Single Tenant sample's NinjectWebCommon class is:
var config = MembershipRebootConfig.Create();
kernel.Bind<MembershipRebootConfiguration>().ToConstant(config);
kernel.Bind<DefaultMembershipRebootDatabase>().ToSelf();
kernel.Bind<UserAccountService>().ToSelf();
kernel.Bind<AuthenticationService>().To<SamAuthenticationService>();
kernel.Bind<IUserAccountQuery>().To<DefaultUserAccountRepository>().InRequestScope();
kernel.Bind<IUserAccountRepository>().To<DefaultUserAccountRepository>().InRequestScope();
Now, I'll set out the whole SimpleInjectorInitializer class, which started from the one added to the project via the SimpleInjector.MVC3 NuGet package, and follow up with comments:
public static class SimpleInjectorInitializer
{
    /// <summary>Initialize the container and register it as MVC3 Dependency Resolver.</summary>
    public static void Initialize()
    {
        var container = new Container();
        container.Options.DefaultScopedLifestyle = new WebRequestLifestyle();

        container.RegisterMvcControllers(Assembly.GetExecutingAssembly());

        InitializeContainer(container);
        container.Verify();

        DependencyResolver.SetResolver(new SimpleInjectorDependencyResolver(container));
    }

    private static void InitializeContainer(Container container)
    {
        Database.SetInitializer(new MigrateDatabaseToLatestVersion<DefaultMembershipRebootDatabase, BrockAllen.MembershipReboot.Ef.Migrations.Configuration>());

        var config = MembershipRebootConfig.Create();
        container.Register(() => config, Lifestyle.Singleton);
        container.Register(() => new DefaultMembershipRebootDatabase(), Lifestyle.Scoped);
        container.Register<IUserAccountQuery, DefaultUserAccountRepository>(Lifestyle.Scoped); // per-request scope; see the DefaultScopedLifestyle setting of the container above
        container.Register<IUserAccountRepository, DefaultUserAccountRepository>(Lifestyle.Scoped);
        container.Register(() => new UserAccountService(container.GetInstance<MembershipRebootConfiguration>(), container.GetInstance<IUserAccountRepository>()));
        container.Register<AuthenticationService, SamAuthenticationService>();

        var iUserAccountQueryRegistration = container.GetRegistration(typeof(IUserAccountQuery)).Registration;
        var iUserAccountRepositoryRegistration = container.GetRegistration(typeof(IUserAccountRepository)).Registration;

        iUserAccountQueryRegistration.SuppressDiagnosticWarning(DiagnosticType.TornLifestyle, "Intend for separate Objects");
        iUserAccountRepositoryRegistration.SuppressDiagnosticWarning(DiagnosticType.TornLifestyle, "Intend for separate Objects");
    }
}
Scoping the config to a Singleton with a factory func is pretty much the same as Ninject's ToConstant.
DefaultMembershipRebootDatabase is the obvious departure, but I honestly don't think it matters whether MR's DefaultMembershipRebootDatabase is scoped as transient or per web request. It calls SaveChanges every time an operation is performed, e.g. registering a user. It does not use larger, per-request-bound transactions. So using the same DefaultMembershipRebootDatabase context later in the same request is not going to cause any weird MR issues.
HOWEVER, some thought will need to be given to what happens if you want to create a Domain User during the same operation as you create an MR UserAccount. (A Domain User may contain more information beyond password stuff, like first and last names, DOB, etc.) Tying an MR UserAccount to a Domain User (with additional user info such as name, address, etc.) is a common use case. So what happens if the creation of the Domain User fails after the creation of the MR UserAccount succeeded? I don't know. Perhaps as part of the rollback you delete the MR user, but the registration email will already have been sent. So these are the issues that you face here.
As you can see, in the Single Tenant sample, Brock registers both IUserAccountRepository and IUserAccountQuery to DefaultUserAccountRepository. This is obviously by design, so we have to do the same if we want to use MR's UserAccountService and AuthenticationService. Thus, we need to suppress the diagnostic warnings which would otherwise prevent the container from verifying.
Hope that all helps and by all means let me know if there are problems with my registrations.
Cheers
