I need two outputs from a transformer. How can I process multiple outputs from a transformer with the Kafka Streams DSL? I'd like to get two KStreams with different types after transform().
someMethod(KStream<String, Transaction> transaction) {
    transaction
        .transform(() -> new MyTransformer(...))
    // what can I do here?
}
public class MyTransformer implements Transformer<...> {

    public KeyValue<String, Aggregator> transform(String key, Integer value) {
        if (...) {
            context.forward(key1, new A(...), To.child("first_child"));
        } else {
            context.forward(key2, new B(...), To.child("second_child"));
        }
        return null; // records are emitted via forward()
    }
}
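A minimal sketch of one common approach (the Event supertype is an illustrative assumption, not from the original code): in the DSL, transform() has a single downstream node, so the transformer can forward a shared supertype (and return null from transform()), and the resulting stream can then be split into two typed KStreams with filter() and mapValues():

// Sketch only: MyTransformer would call context.forward(key, new A(...)) or
// context.forward(key, new B(...)) without To.child(), and return null.
// A and B are both assumed to extend the illustrative Event supertype.
KStream<String, Event> transformed = transaction.transform(() -> new MyTransformer());

KStream<String, A> firstStream = transformed
        .filter((key, value) -> value instanceof A)
        .mapValues(value -> (A) value);

KStream<String, B> secondStream = transformed
        .filter((key, value) -> value instanceof B)
        .mapValues(value -> (B) value);

KStream#branch() with two predicates would achieve the same split in a single call.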
Related
I am working on a project where I need to validate whether a consumer group has been created on a topic or not. Is there any way in Spring Kafka to validate it?
Currently, describeConsumerGroups is not exposed through Spring Kafka's KafkaAdmin, so you may need to create a Kafka AdminClient and call the method yourself.
For example, here I took advantage of the auto-configuration properties class KafkaProperties and autowired it into the service.
@Service
public class KafkaBrokerService implements BrokerService {

    private Map<String, Object> configs;

    public KafkaBrokerService(KafkaProperties kafkaProperties) { // Autowired
        this.configs = kafkaProperties.buildAdminProperties();
    }

    private AdminClient createAdmin() {
        Map<String, Object> configs2 = new HashMap<>(this.configs);
        return AdminClient.create(configs2);
    }

    public SomeDto consumerGroupDescription(String groupId) {
        try (AdminClient adminClient = createAdmin()) {
            // ConsumerGroup's members
            ConsumerGroupDescription consumerGroupDescription = adminClient.describeConsumerGroups(Collections.singletonList(groupId))
                    .describedGroups().get(groupId).get();
            // ConsumerGroup's partitions and the committed offset in each partition
            Map<TopicPartition, OffsetAndMetadata> offsets = adminClient.listConsumerGroupOffsets(groupId).partitionsToOffsetAndMetadata().get();
            // When you get the information, you can validate it here.
            ...
        } catch (ExecutionException | InterruptedException e) {
            //
        }
    }
}
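For the validation itself (the part elided above), a minimal sketch, assuming the goal is to check whether the group touches a given topic; the method name, topicName parameter and boolean result are illustrative:

// Illustrative only: returns true if the group has an active member assignment
// or a committed offset on the given topic.
private boolean groupUsesTopic(ConsumerGroupDescription description,
                               Map<TopicPartition, OffsetAndMetadata> offsets,
                               String topicName) {
    boolean assigned = description.members().stream()
            .flatMap(member -> member.assignment().topicPartitions().stream())
            .anyMatch(tp -> tp.topic().equals(topicName));
    boolean committed = offsets.keySet().stream()
            .anyMatch(tp -> tp.topic().equals(topicName));
    return assigned || committed;
}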
An ItemReader reads data from DB2 and produces a ClaimDto Java object. The ClaimProcessor then takes the ClaimDto object and returns a CompositeClaimRecord object, which comprises claimRecord1 and claimRecord2, to be sent to two different Kafka topics. How can I write claimRecord1 and claimRecord2 to topic1 and topic2 respectively?
Just write a custom ItemWriter that does exactly that.
public class YourItemWriter implements ItemWriter<CompositeClaimRecord> {

    private final ItemWriter<ClaimRecord1> writer1;
    private final ItemWriter<ClaimRecord2> writer2;

    public YourItemWriter(ItemWriter<ClaimRecord1> writer1, ItemWriter<ClaimRecord2> writer2) {
        this.writer1 = writer1;
        this.writer2 = writer2;
    }

    @Override
    public void write(List<? extends CompositeClaimRecord> items) throws Exception {
        for (CompositeClaimRecord record : items) {
            writer1.write(Collections.singletonList(record.claimRecord1));
            writer2.write(Collections.singletonList(record.claimRecord2));
        }
    }
}
Or, instead of writing one record at a time, convert the single list into two lists and pass those along. But error handling might be a bit of a challenge that way.
public class YourItemWriter implements ItemWriter<CompositeClaimRecord> {

    private final ItemWriter<ClaimRecord1> writer1;
    private final ItemWriter<ClaimRecord2> writer2;

    public YourItemWriter(ItemWriter<ClaimRecord1> writer1, ItemWriter<ClaimRecord2> writer2) {
        this.writer1 = writer1;
        this.writer2 = writer2;
    }

    @Override
    public void write(List<? extends CompositeClaimRecord> items) throws Exception {
        List<ClaimRecord1> record1List = items.stream().map(it -> it.claimRecord1).collect(Collectors.toList());
        List<ClaimRecord2> record2List = items.stream().map(it -> it.claimRecord2).collect(Collectors.toList());
        writer1.write(record1List);
        writer2.write(record2List);
    }
}
You can use a ClassifierCompositeItemWriter with two KafkaItemWriters as delegates (one for each topic).
The Classifier would classify items according to their type (ClaimRecord1 or ClaimRecord2) and route them to the corresponding Kafka item writer (topic1 or topic2), as sketched below.
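A minimal sketch of that arrangement (Spring Batch 4.x style; the topic-specific KafkaTemplates, the getId() key mapper and the raw-type casts in the classifier are illustrative assumptions, and it presumes the items reaching the writer are the individual claim records rather than the composite):

@Bean
public KafkaItemWriter<String, ClaimRecord1> claimRecord1Writer(KafkaTemplate<String, ClaimRecord1> topic1Template) {
    // topic1Template is assumed to have "topic1" configured as its default topic
    KafkaItemWriter<String, ClaimRecord1> writer = new KafkaItemWriter<>();
    writer.setKafkaTemplate(topic1Template);
    writer.setItemKeyMapper(record -> record.getId()); // illustrative key mapping
    writer.setDelete(false);
    return writer;
}

// claimRecord2Writer would be configured the same way, with a template whose default topic is "topic2"

@Bean
@SuppressWarnings({ "unchecked", "rawtypes" })
public ClassifierCompositeItemWriter<Object> claimRecordRouter(KafkaItemWriter<String, ClaimRecord1> claimRecord1Writer,
        KafkaItemWriter<String, ClaimRecord2> claimRecord2Writer) {
    ClassifierCompositeItemWriter<Object> writer = new ClassifierCompositeItemWriter<>();
    writer.setClassifier(item -> item instanceof ClaimRecord1
            ? (ItemWriter) claimRecord1Writer
            : (ItemWriter) claimRecord2Writer);
    return writer;
}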
I have 2 JMS queues and my application subscribes to both of them with the Jms.messageDrivenChannelAdapter(...) component.
The first queue receives messages of type Paid. The second queue receives messages of type Reversal.
The business scenario defines a correlation between messages of type Paid and type Reversal.
A Reversal should wait for its Paid in order to be processed.
How can I achieve such a "wait" pattern with Spring Integration?
Is it possible to correlate messages between 2 JMS queues?
See the documentation about the Aggregator.
The aggregator correlates messages using some correlation strategy and releases the group based on some release strategy.
The Aggregator combines a group of related messages, by correlating and storing them, until the group is deemed to be complete. At that point, the aggregator creates a single message by processing the whole group and sends the aggregated message as output.
The output payload is a list of the grouped message payloads by default, but you can provide a custom output processor.
EDIT
@SpringBootApplication
public class So55299268Application {

    public static void main(String[] args) {
        SpringApplication.run(So55299268Application.class, args);
    }

    @Bean
    public IntegrationFlow in1(ConnectionFactory connectionFactory) {
        return IntegrationFlows.from(Jms.messageDrivenChannelAdapter(connectionFactory)
                .destination("queue1"))
                .channel("aggregator.input")
                .get();
    }

    @Bean
    public IntegrationFlow in2(ConnectionFactory connectionFactory) {
        return IntegrationFlows.from(Jms.messageDrivenChannelAdapter(connectionFactory)
                .destination("queue2"))
                .channel("aggregator.input")
                .get();
    }

    @Bean
    public IntegrationFlow aggregator() {
        return f -> f
                .aggregate(a -> a
                        .correlationExpression("headers.jms_correlationId")
                        .releaseExpression("size() == 2")
                        .expireGroupsUponCompletion(true)
                        .expireGroupsUponTimeout(true)
                        .groupTimeout(5_000L)
                        .discardChannel("discards.input"))
                .handle(System.out::println);
    }

    @Bean
    public IntegrationFlow discards() {
        return f -> f.handle((p, h) -> {
            System.out.println("Aggregation timed out for " + p);
            return null;
        });
    }

    @Bean
    public ApplicationRunner runner(JmsTemplate template) {
        return args -> {
            send(template, "one", "two");
            send(template, "three", null);
        };
    }

    private void send(JmsTemplate template, String one, String two) {
        template.convertAndSend("queue1", one, m -> {
            m.setJMSCorrelationID(one);
            return m;
        });
        if (two != null) {
            template.convertAndSend("queue2", two, m -> {
                m.setJMSCorrelationID(one);
                return m;
            });
        }
    }
}
and the output:
GenericMessage [payload=[two, one], headers={jms_redelivered=false, jms_destination=queue://queue1, jms_correlationId=one, id=784535fe-8861-1b22-2cfa-cc2e67763674, priority=4, jms_timestamp=1553290921442, jms_messageId=ID:Gollum2.local-55540-1553290921241-4:1:3:1:1, timestamp=1553290921457}]
2019-03-22 17:42:06.460 INFO 55396 --- [ask-scheduler-1] o.s.i.a.AggregatingMessageHandler : Expiring MessageGroup with correlationKey[three]
Aggregation timed out for three
I am using Spring Cloud Stream (Kafka) to exchange messages between producer and consumer microservices.
It exchanges data with native Java serialization. As per the Spring Cloud documentation, it supports JSON and Avro serialization.
Has anyone tried Protobuf serialization (a message converter) in Spring Cloud Stream?
EDIT
I wrote this MessageConverter:
public class ProtobufMessageConverter<T extends AbstractMessage> extends AbstractMessageConverter
{
    private Parser<T> parser;

    public ProtobufMessageConverter(Parser<T> parser)
    {
        super(new MimeType("application", "protobuf"));
        this.parser = parser;
    }

    @Override
    protected boolean supports(Class<?> clazz)
    {
        if (clazz != null)
        {
            return EquipmentProto.Equipment.class.isAssignableFrom(clazz);
        }
        return true;
    }

    @Override
    public Object convertFromInternal(Message<?> message, Class<?> targetClass, Object conversionHint)
    {
        if (!(message.getPayload() instanceof byte[]))
        {
            return null;
        }
        try
        {
            // return EquipmentProto.Equipment.parseFrom((byte[]) message.getPayload());
            return parser.parseFrom((byte[]) message.getPayload());
        }
        catch (Exception e)
        {
            this.logger.error(e.getMessage(), e);
        }
        return null;
    }

    @Override
    protected Object convertToInternal(Object payload, MessageHeaders headers, Object conversionHint)
    {
        return ((AbstractMessage) payload).toByteArray();
    }
}
It's really not a question of trying but rather just doing it, since converters are a natural extension mechanism (inherited from Spring Integration) in Spring Cloud Stream that exists specifically to address these concerns. So yes, you can add your own custom converter.
Also, keep in mind that with Kafka there is also the concept of native serdes, so you need to make sure that the two do not create a conflict.
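A minimal sketch of how such a converter might be registered (the configuration class, bean name and Equipment parser are illustrative; depending on the Spring Cloud Stream version, the @StreamMessageConverter annotation may or may not be required):

@Configuration
public class ProtobufConverterConfig
{
    // Exposes the custom converter as a bean so the binder can use it for the
    // "application/protobuf" content type; Equipment.parser() is the generated
    // Protobuf parser and is used here purely as an illustration.
    @Bean
    @StreamMessageConverter
    public MessageConverter equipmentProtobufMessageConverter()
    {
        return new ProtobufMessageConverter<>(EquipmentProto.Equipment.parser());
    }
}

The binding would then declare application/protobuf as its content type and, for the Kafka binder, leave native encoding/decoding disabled for that binding so the converter, rather than a native Kafka serde, handles the payload; the exact property names can vary slightly between Spring Cloud Stream versions.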
Using spring-kafka, I want to publish the result (?) of the KTable as a new event into another topic, but the stream for topicB never receives events.
The same seems to happen if I call mapValues() on the KTable directly. What am I missing in order to work with the content of the KTable?
@Bean
KTable reportStream(StreamsBuilder builder, Engine engine) {
    def stream = builder.stream("topicA")
            .groupBy({ key, word -> word })
            .windowedBy(SessionWindows.with(TimeUnit.SECONDS.toMillis(1)))
            .aggregate(
                    new Initializer<Long>() {
                        @Override
                        Long apply() {
                            0
                        }
                    },
                    new Aggregator<String, String, Long>() {
                        @Override
                        Long apply(String key, String value, Long aggregate) {
                            def l = 1 + aggregate
                            return l
                        }
                    },
                    new Merger() {
                        @Override
                        Long apply(Object aggKey, Object aggOne, Object aggTwo) {
                            return aggOne + aggTwo
                        }
                    },
                    Materialized.with(Serdes.String(), Serdes.Long()))
    stream.toStream().to("topicB")
    stream
}
@Bean
KStream classificationStream(StreamsBuilder builder, Engine engine) {
    builder.stream("topicB").mapValues({
        println "topicB"
        println it
    })
}