can ChannelInterceptor know channel name? - spring

In my Spring project I use a channel interceptor (registered with the @GlobalChannelInterceptor annotation) to intercept messages on all channels. I want this interceptor to behave differently for different channels - for instance, for some channels it should add header A, for others header B, and so on. Of course, I could just use several different interceptors (one that adds header A, another that adds header B, etc.) registered for different channels, but in my case that's impossible, because I want this decision - which headers are added for which channels - to be configurable. So when I write the code, I don't even know how many interceptors I would need; that will be decided by the configuration. I would therefore like to have one interceptor whose preSend method somehow checks the name of the channel the current message is being sent on, and then decides, based on that name, which header to add.
Is it possible? Is it possible for preSend method of a channel interceptor to know the channel name? Or is there any other way to achieve my goal?

See preSend() signature:
default Message<?> preSend(Message<?> message, MessageChannel channel)
Pay attention to that channel arg.
Normally, when a ChannelInterceptor is used from the Spring dependency injection container, it is applied to MessageChannel beans. And if you deal with standard Spring Integration channels, all of them are instances of NamedComponent.
So, you just need to cast that channel arg to NamedComponent and call its getBeanName() in your preSend() impl.
You should also double-check with instanceof before casting, because the MessageChannel impls in core Spring Messaging do not provide a hook to get the bean name.
One more caveat: if you apply such an interceptor to a MessageChannel that is not a bean, its bean name will also be null.
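A minimal sketch of such an interceptor, assuming standard Spring Integration channels (the channel and header names here are illustrative, not from the question):

```java
import org.springframework.integration.support.context.NamedComponent;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.ChannelInterceptor;
import org.springframework.messaging.support.MessageBuilder;

public class ChannelNameAwareInterceptor implements ChannelInterceptor {

    @Override
    public Message<?> preSend(Message<?> message, MessageChannel channel) {
        // Guard with instanceof: plain spring-messaging channels are not NamedComponents
        if (channel instanceof NamedComponent) {
            String channelName = ((NamedComponent) channel).getBeanName();
            if ("ordersChannel".equals(channelName)) {      // illustrative channel name
                return MessageBuilder.fromMessage(message)
                        .setHeader("A", "valueA")           // illustrative header
                        .build();
            }
        }
        return message;
    }
}
```

The channel-name-to-header mapping could then be read from configuration (e.g. a `Map<String, String>` injected into the interceptor) instead of the hard-coded check shown here.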

Related

Kafka Streams - override default addSink implementation / custom producer

This is my first post here and I am not sure if this was covered before, but here goes: I have a Kafka Streams application, using the Processor API, with the topology below:
1. Consume data from an input topic (processor.addSource())
2. Inserts data into a DB (processor.addProcessor())
3. Produce its process status to an output topic (processor.addSink())
The app works fine; however, for traceability purposes, I need to log the moment Kafka Streams produces a message to the output topic, along with its RecordMetadata (topic, partition, offset).
Example below:
KEY="MY_KEY" OUTPUT_TOPIC="MY-OUTPUT-TOPIC" PARTITION="1" OFFSET="1000" STATUS="SUCCESS"
I am not sure if there is a way to override the default Kafka Streams producer to add this logging, or maybe create my own producer and plug it into the addSink step. I partially achieved it by implementing my own exception handler (default.production.exception.handler), but it only covers the exceptions.
Thanks in advance,
Guilherme
If you configure the streams application to use a ProducerInterceptor, then you should be able to get the information you need. Specifically, implementing the onAcknowledgement() method will give you access to everything you listed above.
To configure interceptors in a streams application:
Properties props = new Properties();
// add this configuration in addition to your other streams configs
props.put(StreamsConfig.producerPrefix(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG), Collections.singletonList(MyProducerInterceptor.class));
You can provide more than one interceptor if desired, just add the class name and change the list implementation from a singleton to a regular List. Execution of the interceptors follows the order of the classes in the list.
EDIT: Just to be clear, you can override the provided Producer in Kafka Streams via the KafkaClientSupplier interface, but IMHO using an interceptor is the cleaner approach. But which direction to go is up to you. You pass in your KafkaClientSupplier in an overloaded Kafka Streams constructor.
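A sketch of what such an interceptor could look like (the class name matches the config snippet above; the log format mirrors the one in the question, except that the record key is not available in onAcknowledgement(), only in onSend()):

```java
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerInterceptor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class MyProducerInterceptor implements ProducerInterceptor<String, String> {

    @Override
    public ProducerRecord<String, String> onSend(ProducerRecord<String, String> record) {
        return record; // pass the record through unchanged; the key is visible here
    }

    @Override
    public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
        // Called when the broker acknowledges the record (or the send fails)
        if (exception == null && metadata != null) {
            System.out.printf(
                "OUTPUT_TOPIC=\"%s\" PARTITION=\"%d\" OFFSET=\"%d\" STATUS=\"SUCCESS\"%n",
                metadata.topic(), metadata.partition(), metadata.offset());
        }
    }

    @Override
    public void close() { }

    @Override
    public void configure(Map<String, ?> configs) { }
}
```

If the key must appear in the same log line, one option is to stash it in onSend() (e.g. keyed by topic/partition) and correlate it in onAcknowledgement(), since the callback only receives the metadata.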

If Header is itself not present and we want to validate and proceed in routeBuilder in Camel

I am facing an issue in Camel coding:
1). We have two layers of code: the first one is the consumer and the other is the producer.
2). The consumer calls the producer, as the producer has many microservices.
3). During the call, the producer generates a unique ID for transaction tracking.
4). We can call the producer directly and it will generate the result set.
5). During the producer call we have to add the unique log transaction ID to the header from POSTMAN.
Now the question is: if we want to hit the producer directly and do not want to pass the log transaction ID, is there any way for my producer route to detect that LOGTRANSACTION is not present in the headers, generate a header named "LOGTRANSACTION", and add a unique value to it?
And if we hit the consumer, the LOGTRANSACTION ID should propagate as-is to the producer layer.
Presuming the header you are talking about is a Camel message header, you may add a new Processor in front of your existing route to inspect the incoming Message with getHeader("LOGTRANSACTION"). If this header is not present, your new processor can call setHeader("LOGTRANSACTION", newValue) to attach it. Bear in mind that if you work with exchange.getIn(), all the inbound headers and body are preserved, but calls to getOut() will clear the original IN message. If you want further (better) answers, please consider posting the relevant parts of your route(s) as well.
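A minimal sketch of that idea in the Java DSL (the endpoint URIs are illustrative; only the header name comes from the question):

```java
import java.util.UUID;
import org.apache.camel.builder.RouteBuilder;

public class ProducerRoute extends RouteBuilder {

    @Override
    public void configure() {
        from("direct:producer")                          // illustrative endpoint
            .process(exchange -> {
                // Generate the ID only when the caller did not supply one;
                // a header set by the consumer layer passes through untouched
                if (exchange.getIn().getHeader("LOGTRANSACTION") == null) {
                    exchange.getIn().setHeader("LOGTRANSACTION",
                            UUID.randomUUID().toString());
                }
            })
            .to("direct:microservices");                 // rest of the producer flow
    }
}
```

Because the processor only fills in a missing header, the same route works for both entry points: direct POSTMAN calls get a fresh UUID, while calls through the consumer keep the propagated LOGTRANSACTION value.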

How to test delivery in PublishSubscribeChannel?

I have a PublishSubscribeChannel in my application which should deliver messages to different MessageHandlers inside the same JVM. Handlers are subscribed to the channel using the @StreamListener annotation. The channel uses an Executor, so delivery is asynchronous.
Now, I want to test that senders and handlers agree on the specific object type which send through channel (the type of Message body). AFAIU I have two ways to test this:
1. Find all subscribers of the given channel and verify their signatures.
2. Send a message to the channel and verify that no handlers have thrown an exception.
I have no idea how to do (1). And I think I could do (2) by listening to errorChannel (there should be no messages there), but I don't quite understand how long should I wait for error messages.
Any suggestions?
For (1), you can use reflection to look at the collection of handlers in the channel's dispatcher, then use reflection again to look at the handler's Method.
However, your design is flawed, unless you don't mind losing messages; the incoming message will be ack'd as soon as you hand off to the executor; if the server then crashes, the message will be lost.
If you get rid of the executor, it would be simpler to add an interceptor to the channel, which will be notified of any exceptions via its afterSendCompletion() method (satisfying your (2)).
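A minimal sketch of such an interceptor, assuming the executor has been removed so subscriber exceptions surface on the sending thread (the class name is illustrative):

```java
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.ChannelInterceptor;

public class FailureTrackingInterceptor implements ChannelInterceptor {

    private volatile Exception lastError;

    @Override
    public void afterSendCompletion(Message<?> message, MessageChannel channel,
            boolean sent, Exception ex) {
        // With synchronous dispatch, a handler exception arrives here as 'ex'
        if (ex != null) {
            this.lastError = ex;
        }
    }

    public Exception getLastError() {
        return this.lastError;
    }
}
```

A test could then add this interceptor to the channel, send a message of the type under test, and assert that getLastError() returns null, with no need to wait on an errorChannel.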

why does spring cloud stream's #InboundChannelAdapter accept no parameters?

I'm trying to use spring cloud stream to send and receive messages on kafka. The examples for this use a simple example of using time stamps as the messages. I'm trying to go just one step further into a real world application when I ran into this blocker on the InboundChannelAdapter docs:
"A method annotated with #InboundChannelAdapter can't accept any parameters"
I was trying to use it like so:
@InboundChannelAdapter(value = ChannelManager.OUTPUT)
public EventCreated createCustomerEvent(String customerId, String thingId) {
    return new EventCreated(customerId, thingId);
}
What usage am I missing? I imagine that when you want to create an event, you have some data that you want to use for that event, and so you would normally pass that data in via parameters. But "a method annotated with @InboundChannelAdapter can't accept any parameters". So how are you supposed to use this?
I understand that @InboundChannelAdapter comes from Spring Integration, which Spring Cloud Stream extends, and so Spring Integration may have a different context in which this makes sense. But it seems unintuitive to me (as does using an InboundChannelAdapter for an output/producer/source).
Well, first of all, @InboundChannelAdapter is defined in Spring Integration, and Spring Cloud Stream doesn't extend it. That's false; not sure where you picked up that info...
This annotation builds something like a SourcePollingChannelAdapter, which provides a poller based on the scheduler and periodically calls MessageSource.receive(). Since there is no context and the end user can't affect that poller's behavior with their own arguments, the requirement for an empty parameter list is obvious.
This @InboundChannelAdapter is the beginning of the flow, and it is active: it does its logic in the background, without your events.
If you would like to call some method with parameters and trigger a flow with that, you should consider using @MessagingGateway: http://docs.spring.io/spring-integration/reference/html/messaging-endpoints-chapter.html#messaging-gateway-annotation
How are you expecting to call that method? I think there was a miscommunication with your statement "stream extends integration", and Artem probably understood that we extend @InboundChannelAdapter.
So, if you are actively calling this method, as it appears since you do have arguments that are passed to it, why not just use your source channel to send the data?
Usually sources do not require arguments, as they are either push-based, like the Twitter stream source that taps Twitter, listens for events, and pushes them to the source channel, or they are polled, in which case they are invoked on an interval defined via a poller.
As Artem pointed out, if your intention is to call this method from your business flow and deal with the return value while triggering a message flow, then check his link to the docs.
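A sketch of the @MessagingGateway alternative for the asker's case (the interface and channel names are illustrative; EventCreated is the asker's own type):

```java
import org.springframework.integration.annotation.Gateway;
import org.springframework.integration.annotation.MessagingGateway;

// Spring generates the implementation of this interface at runtime;
// the requestChannel can be the channel bound to the Kafka destination
@MessagingGateway
public interface EventGateway {

    @Gateway(requestChannel = "eventCreatedChannel")  // illustrative channel name
    void send(EventCreated event);
}
```

The business code then calls `eventGateway.send(new EventCreated(customerId, thingId))`, which wraps the argument in a Message and sends it to the channel, giving the parameterized, actively-called entry point that @InboundChannelAdapter deliberately does not provide.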

Strategy for passing same payload between messages when optional outbound gateways fail

I have a workflow whose message payload (MasterObj) is being enriched several times. During the second enrichment an UnknownHostException was thrown by an outbound gateway. The error channel on my enricher is called, but the message the error channel receives is an exception, and the failed message inside that exception is no longer my MasterObj (the original payload); it is now the object produced by the request-payload-expression on the enricher.
The enricher calls an outbound-gateway and business-wise this is optional. I just want to continue my workflow with the payload that I've been enriching. The docs say that the error-channel on the enricher can be used to provide an alternate object (to what the enricher's request-channel would return) but even when I return an object from the enricher's error-channel, it still takes me to the workflow's overall error channel.
How do I trap errors from enricher's + outbound-gateways, and continue processing my workflow with the same payload I've been working on?
Is trying to maintain a single payload object for the entire workflow the right strategy? I need to be able to access it whenever I need.
I was thinking of using a bean scoped to the session where I store the payload but that seems to defeat the purpose of SI, no?
Thanks.
Well, if you worry about your MasterObj in the error-channel flow, don't use that request-payload-expression and let the original payload go to the enricher's sub-flow.
You can always use a simple <transformer expression=""> in that flow.
On the other hand, you're right: it isn't a good strategy to carry a single object through the whole flow. You carry messages via channels, and it isn't good to be tied to them at each step. The Spring Integration purpose is to be able to switch between different MessageChannel types at any time with little effort for their producers and consumers. You can also switch to distributed mode, where consumers and producers are on different machines.
If you still need to enrich the same object several times, consider writing some custom Java code. You can use a @MessagingGateway for that and still keep the Spring Integration gains.
And right, session scope is not good for an integration flow, because you can simply switch to a different channel type there and lose the ThreadLocal context.
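A sketch of an error-channel handler that resumes the flow with the original payload, assuming the request-payload-expression has been dropped so the failed message still carries the MasterObj (the channel and class names are illustrative):

```java
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.messaging.MessagingException;
import org.springframework.stereotype.Component;

@Component
public class EnricherErrorHandler {

    // Subscribed to the enricher's error-channel; the returned value becomes
    // the reply, so the main flow continues with the pre-enrichment payload
    @ServiceActivator(inputChannel = "enricherErrorChannel") // illustrative name
    public Object recover(MessagingException exception) {
        // Without request-payload-expression, failedMessage holds the MasterObj
        return exception.getFailedMessage().getPayload();
    }
}
```

This treats the failed (optional) enrichment as a no-op: the exception is swallowed on that channel and the untouched MasterObj flows on to the next step instead of landing in the workflow's overall error channel.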
