OpenTelemetry trace not propagated to Reactive Messaging processor - quarkus

I'm using Quarkus to build a simple Kafka message processor. The processor consumes messages from a Kafka topic and produces messages to another topic; it's pretty straightforward.
I have quarkus-opentelemetry-exporter-otlp enabled in my application, and I was hoping to see traces generated by smallrye-reactive-messaging-kafka and my own traces correctly nested. This doesn't seem to be working, though.
My processor looks like this:
@ApplicationScoped
class MessageProcessor @Inject constructor(private val service: MyService) {
    @Blocking
    @Incoming("requests")
    @Outgoing("events")
    @WithSpan("email.process")
    fun process(incoming: MessageEnvelope): MessageEnvelope {
        Span.current()
            .setAttribute(Keys.TYPE, incoming.type)
            .setAttribute(Keys.TO, incoming.to)
        return service.send(incoming)
    }
}
In Jaeger, I can see the spans produced by Smallrye Reactive Messaging and my own email.process span, but they're not nested as I expected.
What do I need to do to propagate the trace context correctly?
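One direction worth trying (a sketch, not a confirmed fix): switch the signature to Message-to-Message so the metadata added by the Kafka connector, including SmallRye's tracing metadata, travels through the processor, and parent a manually created span on the context carried in that metadata instead of relying on @WithSpan. The TracingMetadata.fromMessage(...)/getCurrentContext() calls are assumptions and may differ between SmallRye Reactive Messaging versions; the other names come from the code above.

import io.opentelemetry.api.trace.Tracer
import io.opentelemetry.context.Context
import io.smallrye.common.annotation.Blocking
import io.smallrye.reactive.messaging.TracingMetadata
import org.eclipse.microprofile.reactive.messaging.Incoming
import org.eclipse.microprofile.reactive.messaging.Message
import org.eclipse.microprofile.reactive.messaging.Outgoing
import javax.enterprise.context.ApplicationScoped // jakarta.* on newer Quarkus
import javax.inject.Inject

@ApplicationScoped
class MessageProcessor @Inject constructor(
    private val service: MyService,
    private val tracer: Tracer // Tracer bean provided by quarkus-opentelemetry
) {
    @Blocking
    @Incoming("requests")
    @Outgoing("events")
    fun process(incoming: Message<MessageEnvelope>): Message<MessageEnvelope> {
        val envelope = incoming.payload
        // Assumed API: read the consumer span's context from the message metadata, if present.
        val parent = TracingMetadata.fromMessage(incoming)
            .map { it.getCurrentContext() }
            .orElse(Context.current())
        val span = tracer.spanBuilder("email.process")
            .setParent(parent)
            .startSpan()
        return try {
            span.setAttribute(Keys.TYPE, envelope.type)
            span.setAttribute(Keys.TO, envelope.to)
            // withPayload() keeps the incoming metadata, so the outgoing record stays in the same trace.
            incoming.withPayload(service.send(envelope))
        } finally {
            span.end()
        }
    }
}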

Related

Using TestChannelBinderConfiguration but two handlers get registered

I have a Spring Cloud Stream application that processes (credit card) events -- some of which are processed synchronously and some asynchronously. I came up with roughly the following in Kotlin:
spring-cloud-starter-stream-rabbit = 3.1.0
@Service
class CardEventProcessor(
    private val streamBridge: StreamBridge,
) : Consumer<AsyncCardEvent> {

    fun process(cardEvent: SyncCardEvent): Result { return businessLogic() }

    fun processAsynchronously(cardEvent: AsyncCardEvent) {
        streamBridge.send("cardEventProcessor-out-0", cardEvent)
    }

    override fun accept(cardEvent: AsyncCardEvent) { businessLogic() }
}
and have configured it like so:
rabbitmq: ...
cloud:
  stream:
    bindings:
      cardEventProcessor-in-0:
        destination: cardevents
        group: CardEventProcessor
      cardEventProcessor-out-0:
        destination: cardevents
It all seems to work fine, except that in integration tests the processing fails after the second async card event. I was able to debug and reduce the issue down to two handlers being registered in the UnicastingDispatcher, which has a round-robin strategy: one for the TestChannelBinder and another for OutputDestination$lambda.
This is what my integration test class looks like:
@SpringBootTest
@Transactional
@AutoConfigureMockMvc
@AutoConfigureEmbeddedDatabase
@Import(TestChannelBinderConfiguration::class)
class IntegrationTests {

    @Test
    fun `Use case one`() {
        sendFirstAsyncRequest() // processed correctly in CardEventProcessor.accept()
        sendSecondAsyncRequest() // message never arrives in CardEventProcessor.accept()
    }
}
I was following the Testing section in the Spring Cloud Stream docs and can't figure out what I'm missing to get this working. The example there is a Function<> rather than a Consumer<>, and I produce from the same @Service class as the consumer (because the queue is just an implementation detail to solve the async case, not the typical use of queues between microservices), but as far as I understand that should still work, and it does in fact work when not running as an integration test.
I saw Disable Spring Cloud Stream Rabbit for tests but didn't want to depend on the deprecated spring-cloud-stream-test-support, and the other two suggestions didn't work either. Any ideas?

Spring Kafka embedded broker - My actual listener is never triggered

I'm using the embedded Kafka broker with Spring Boot and JUnit 5. I have been able to wire everything up successfully and see that the embedded broker is running.
In my setup method I pump a few messages into the topic that my actual code listens on:
@BeforeAll
public void setup() {
    // code to play down some messages to topic X
}
My consumer/listener is never triggered, despite there being no errors encountered in the setup method.
My consumer is set up like this:
class Consumer {

    @KafkaListener(topics = "X",
        groupId = "...",
        containerFactory = "my-container-factory"
    )
    public void consume(ConsumerRecord<String, byte[]> rec) {
        // logic to handle
        logger.info("Print rec : " + rec);
    }
}
Elsewhere I've set up my listener container factory with a matching name:
@Bean(name = "my-container-factory")
public ConcurrentKafkaListenerContainerFactory<String, byte[]> factory() {
}
What could be wrong with this? My assertions in the test case fail, and I also don't see the log statements that should be printed if my consume method were ever called.
I have a feeling that auto-configuration due to @SpringBootTest and @EmbeddedKafka is setting up some other listener container factory, and that maybe my @KafkaListener annotation is wrong.
I know it's a bit vague, but could you please tell me what/where to look? If I run as a @SpringBootApplication, my Consumer pulls in messages from the actual topic, so there are no problems with my actual app. It's the test that's not behaving as expected.
Please help.
Edit 1:
I have spring.kafka.consumer.auto-offset-reset=earliest set in my yml file.
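A Kotlin sketch of a test setup that usually avoids this (names other than topic "X" are hypothetical): point spring.kafka.bootstrap-servers at the embedded broker and wait for the @KafkaListener container to be assigned its partitions before the setup method publishes, so the records cannot be produced before the consumer has joined the group.

import org.junit.jupiter.api.BeforeAll
import org.junit.jupiter.api.TestInstance
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.boot.test.context.SpringBootTest
import org.springframework.kafka.config.KafkaListenerEndpointRegistry
import org.springframework.kafka.test.EmbeddedKafkaBroker
import org.springframework.kafka.test.context.EmbeddedKafka
import org.springframework.kafka.test.utils.ContainerTestUtils

// Route the application's Kafka clients to the embedded broker started by @EmbeddedKafka.
@SpringBootTest(properties = ["spring.kafka.bootstrap-servers=\${spring.embedded.kafka.brokers}"])
@EmbeddedKafka(partitions = 1, topics = ["X"])
@TestInstance(TestInstance.Lifecycle.PER_CLASS) // allows a non-static @BeforeAll, as in the question
class ConsumerIntegrationTest {

    @Autowired
    lateinit var registry: KafkaListenerEndpointRegistry

    @Autowired
    lateinit var broker: EmbeddedKafkaBroker

    @BeforeAll
    fun setup() {
        // Block until every @KafkaListener container has its partitions assigned,
        // so the messages produced below are seen regardless of auto-offset-reset.
        registry.listenerContainers.forEach { container ->
            ContainerTestUtils.waitForAssignment(container, broker.partitionsPerTopic)
        }
        // ... produce the test messages to topic "X" here
    }
}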

Spring Boot Cloud Stream with Kafka

I'm trying to set up a project with Spring Boot, Spring Cloud Stream, and Kafka. I managed to build a simple example where a listener gets messages from a topic and, after processing them, sends the output to another topic.
My listener and channels are configured like this:
@Component
public class FileEventListener {

    private FileEventProcessorService fileEventProcessorService;

    @Autowired
    public FileEventListener(FileEventProcessorService fileEventProcessorService) {
        this.fileEventProcessorService = fileEventProcessorService;
    }

    @StreamListener(target = FileEventStreams.INPUT)
    public void handleLine(@Payload(required = false) String jsonData) {
        this.fileEventProcessorService.process(jsonData);
    }
}

public interface FileEventStreams {
    String INPUT = "file_events";
    String OUTPUT = "raw_lines";

    @Input(INPUT)
    SubscribableChannel inboundFileEventChannel();

    @Output(OUTPUT)
    MessageChannel outboundRawLinesChannel();
}
The problem with this example is that when the service starts, it doesn't pick up messages that already exist in the topic; it only processes messages sent after it has started. I'm very new to Spring Cloud Stream and Kafka, but from what I've read, this behavior may correspond to the fact that I'm using a SubscribableChannel. I tried using a QueueChannel, for example, to see how it works, but I got the following exception:
Error creating bean with name ... nested exception is java.lang.IllegalStateException: No factory found for binding target type: org.springframework.integration.channel.QueueChannel among registered factories: channelFactory,messageSourceFactory
So, my questions are:
If I want to process all messages that exist in the topic once the application starts (and also have each message processed by only one consumer), am I on the right path?
Even if QueueChannel is not the right choice to achieve the behavior explained in 1), what do I have to add to my project to be able to use this type of channel?
Thanks!
Add spring.cloud.stream.bindings.file_events.group=foo
Anonymous groups consume from the end of the topic only; bindings with a group consume from the beginning, by default.
You cannot use a PollableChannel for a binding, it must be a SubscribableChannel.

Kinesis as producer in Spring Boot Reactive Stream API

I'm trying to build a small Spring Boot Reactive API. The API should let the users subscribe to some data, returned as SSE.
The data is located on a Kinesis Topic.
Creating the reactive API and the StreamListener for Kinesis is fairly easy - but can I combine these, so that the Kinesis topic is used as a producer for the event stream used by my data service?
The code looks more or less like this
// Kinesis binding, with listenerMode: rawRecords
@EnableBinding(Sink.class)
public class KinesisStreamListener {

    @StreamListener(value = Sink.INPUT)
    public void logger(List<Record> payload) throws Exception {
    }
}

@RestController
@RequestMapping("/data")
public class DataResource {

    @Autowired
    DataService service;

    @GetMapping(produces = {MediaType.TEXT_EVENT_STREAM_VALUE, MediaType.APPLICATION_STREAM_JSON_VALUE})
    public Flux<EventObject> getData() {
        return service.getData();
    }
}

@Component
public class DataService {

    Flux<EventObject> getData() {
        Flux<Long> interval = Flux.interval(Duration.ofMillis(1000));
        Flux<EventObject> dataFlux = Flux.fromStream(Stream.generate(() -> ???
        ));
        return dataFlux.zip(interval, dataFlux).map(Tuple2::getT2);
    }
}
Here is a sample of how I would do that: https://github.com/artembilan/sandbox/tree/master/cloud-stream-kinesis-to-webflux.
Once we agree about the details and some improvements, it can go into the official Spring Cloud Stream Samples repository: https://github.com/spring-cloud/spring-cloud-stream-samples
The main idea is to reuse the same Flux provided by the @StreamListener via Spring Cloud Stream Reactive Support. This is already a FluxPublish, so any new SSE connections will work as plain Reactive subscribers.
There are a couple of tricks to keep in mind:
For listenerMode: rawRecords, we also need to configure contentType: application/octet-stream to avoid any conversion attempts when the Binder sends a message to the Sink.INPUT channel.
Since listenerMode: rawRecords returns a List<Record>, the Flux in the @StreamListener method should expect exactly this type, not a plain Record.
Both concerns are being considered as Framework improvements.
So, let us know how it looks and works for you.
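If the reactive @StreamListener support in that sample does not fit your setup, a plainer alternative is to bridge the listener to the SSE endpoint yourself through a Reactor sink. A Kotlin sketch under assumptions: it needs Reactor 3.4+ for the Sinks API, the class name is made up, and it is not the approach used in the linked sample.

import com.amazonaws.services.kinesis.model.Record
import org.springframework.cloud.stream.annotation.StreamListener
import org.springframework.cloud.stream.messaging.Sink
import org.springframework.stereotype.Component
import reactor.core.publisher.Flux
import reactor.core.publisher.Sinks

@Component
class KinesisRecordBridge {

    // Multicast sink shared by all SSE subscribers.
    private val sink: Sinks.Many<Record> =
        Sinks.many().multicast().onBackpressureBuffer()

    @StreamListener(Sink.INPUT)
    fun onRecords(payload: List<Record>) {
        payload.forEach { sink.tryEmitNext(it) }
    }

    // DataService (or the controller directly) can map this Flux into the EventObject stream.
    fun records(): Flux<Record> = sink.asFlux()
}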

How to set a Message Handler programmatically in Spring Cloud AWS SQS?

Maybe someone has an idea about my following problem:
I am currently on a project where I want to use AWS SQS with the Spring Cloud integration. For the receiver part I want to provide an API where a user can register a "message handler" on a queue; the handler is an interface and will contain the user's business logic, e.g.
MyAwsSqsReceiver receiver = new MyAwsSqsReceiver();
receiver.register("a-queue-name", new MessageHandler() {
    @Override
    public void handle(String message) {
        // ... business logic for the received message
    }
});
I found examples, e.g.
https://codemason.me/2016/03/12/amazon-aws-sqs-with-spring-cloud/
and read the docs
http://cloud.spring.io/spring-cloud-aws/spring-cloud-aws.html#_sqs_support
But the only thing I found there to "connect" functionality for processing an incoming message is an annotation on a method, e.g. @SqsListener or @MessageMapping.
These annotations are fixed to a certain queue name, though. So now I am at a loss as to how to dynamically "connect" my provided "MessageHandler" (from my API) to the incoming message for the specified queue name.
In the config of the example there is a SimpleMessageListenerContainer, which gets a QueueMessageHandler set, but this QueueMessageHandler does not seem to be the right place to set my handler, or to override its methods and provide my own subclass of QueueMessageHandler.
I already did something like this with the Spring AMQP integration and RabbitMQ and thought that it would be similar here with AWS SQS.
Does anyone have an idea how to accomplish this?
thx + bye,
Ximon
EDIT:
I found that Spring JMS could actually do that, e.g. www.javacodegeeks.com/2016/02/aws-sqs-spring-jms-integration.html. Does anybody know what consequences using the JMS protocol has here, good or bad?
I am facing the same issue.
I am going an unusual way here: I set up an AWS SQS client bean at configuration time and then, instead of using the @SqsListener annotation to consume from a specific queue, I use the @Scheduled annotation, with which I can programmatically poll (every 10 seconds in my case) whichever queues I want to consume from.
My example iterates over the queues defined in properties and consumes from each one.
Client Bean:
@Bean
@Primary
public AmazonSQSAsync awsSqsClient() {
    return AmazonSQSAsyncClientBuilder
            .standard()
            .withRegion(Regions.EU_WEST_1.getName())
            .build();
}
Consumer:
// injected in the constructor
private final AmazonSQSAsync awsSqsClient;

@Scheduled(fixedDelay = 10000)
public void pool() {
    properties.getSqsQueues()
            .forEach(queue -> {
                val receiveMessageRequest = new ReceiveMessageRequest(queue)
                        .withWaitTimeSeconds(10)
                        .withMaxNumberOfMessages(10);
                // reading the messages
                val result = awsSqsClient.receiveMessage(receiveMessageRequest);
                val sqsMessages = result.getMessages();
                log.info("Received Message on queue {}: message = {}", queue, sqsMessages.toString());
                // deleting the messages
                sqsMessages.forEach(message -> {
                    val deleteMessageRequest = new DeleteMessageRequest(queue, message.getReceiptHandle());
                    awsSqsClient.deleteMessage(deleteMessageRequest);
                });
            });
}
Just to clarify: in my case I need multiple queues, one for each tenant, with the queue URL for each one passed in a property file. Of course, in your case, you could get the queue names from another source, maybe a ThreadLocal which holds the queues you have created at runtime.
If you wish, you can also try the JMS approach, where you create message consumers and add a listener to each one (see the AWS JMS documentation).
When we do Spring and SQS, we use spring-cloud-starter-aws-messaging.
Then just create a listener class:
@Component
public class MyListener {

    @SqsListener(value = "myqueue")
    public void listen(MyMessageType message) {
        // process the message
    }
}
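If the goal is specifically the register("queue", handler) style from the original question, one compromise is a single delegating @SqsListener whose queue name comes from configuration, with the business handlers registered against it at runtime. A Kotlin sketch with hypothetical names, not an official Spring Cloud AWS mechanism; it assumes property placeholders are resolved in the annotation value, and the import package differs between Spring Cloud AWS releases.

import org.springframework.cloud.aws.messaging.listener.annotation.SqsListener // io.awspring.cloud.* in newer releases
import org.springframework.stereotype.Component
import java.util.concurrent.CopyOnWriteArrayList

@Component
class DelegatingSqsReceiver {

    private val handlers = CopyOnWriteArrayList<(String) -> Unit>()

    // The "register a message handler" half of the API described in the question.
    fun register(handler: (String) -> Unit) {
        handlers += handler
    }

    // The queue name is resolved from the app.sqs.queue-name property (a made-up key),
    // so it is configurable, though still fixed for the lifetime of the application context.
    @SqsListener("\${app.sqs.queue-name}")
    fun onMessage(message: String) {
        // Fan the raw message body out to every registered handler.
        handlers.forEach { it(message) }
    }
}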
