Functional binding names: multiple inputs to one output in Spring Cloud Stream

How do I set multiple input channels to output to the same destination?
I have the following configuration:
spring:
  cloud:
    stream:
      function:
        definition: beer;scotch
      bindings:
        notification:
          destination: labron
        beer-in-0:
          destination: wheat
        scotch-in-0:
          destination: wiskey
I want to create a function binding so that each input channel outputs its message to the notification binding.
So in the corresponding code:
@Service
class Notifications {

    @Bean
    fun beer(): Function<String, String> = Function {
        // wanted out channel: beer -> notification
        it.toUpperCase()
    }

    @Bean
    fun scotch(): Function<String, String> = Function {
        // wanted out channel: scotch -> notification
        it.toUpperCase()
    }
}
I want to use Spring Cloud Stream 3.0 functional binding names.
beer -> notification
scotch -> notification
What is the best way to achieve that?

I'd suggest going with something like the following (based on this):
@Bean
public Function<Tuple2<Flux<String>, Flux<String>>, Flux<String>> beerAndScotch() {
    return tuple -> {
        Flux<String> beerStream = tuple.getT1().map(item -> item.toUpperCase());
        Flux<String> scotchStream = tuple.getT2().map(item -> item.toUpperCase());
        return Flux.merge(beerStream, scotchStream);
    };
}
and so your definition should look something like:
spring:
  cloud:
    stream:
      function:
        definition: beerAndScotch
      bindings:
        notification:
          destination: labron
        beerAndScotch-in-0:
          # ...
        beerAndScotch-in-1:
          # ...
        beerAndScotch-out-0:
          destination: labron
This way, both inputs to beer and inputs to scotch get sent to labron.
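If you would rather keep beer and scotch as separate functions instead of merging them, one possible alternative (a sketch only, not part of the original answer) is to declare them as Consumers and publish to the shared binding through StreamBridge, assuming the notification binding keeps its destination: labron configuration from the question:

import java.util.function.Consumer;

import org.springframework.cloud.stream.function.StreamBridge;
import org.springframework.context.annotation.Bean;
import org.springframework.stereotype.Service;

@Service
public class Notifications {

    private final StreamBridge streamBridge;

    public Notifications(StreamBridge streamBridge) {
        this.streamBridge = streamBridge;
    }

    @Bean
    public Consumer<String> beer() {
        // beer -> notification ("notification" is resolved as a binding name if one is
        // configured, otherwise treated as the destination)
        return item -> streamBridge.send("notification", item.toUpperCase());
    }

    @Bean
    public Consumer<String> scotch() {
        // scotch -> notification
        return item -> streamBridge.send("notification", item.toUpperCase());
    }
}

With this variant, spring.cloud.stream.function.definition stays beer;scotch and the beer-in-0 / scotch-in-0 bindings from the question are unchanged.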

Related

Multiple Spring Kafka Streams processors in one Spring Boot application

I have a requirement to write two Spring Kafka Streams processors in one Spring Boot application. Both stream processors need to consume messages from a single input topic and produce their output to another single topic.
The first processor does not process all messages arriving on the input topic; it processes only selected messages (grouped by message key) and uses a windowedBy operation.
The second processor, however, needs to process all messages that do not fall into the first processor's groupBy logic.
I started with the first processor, which works fine. The problem came when I tried to introduce the second processor.
KafkaProcessor.java:
@Bean
public Function<KStream<String, Input>, KStream<String, Output>> processorOne() {
    final AtomicReference<KeyValue<String, Output>> result = new AtomicReference<>(null);
    return kStream -> kStream
            .filter((key, value) -> value != null)
            .filter(this::isValidKey)
            .peek((key, value) -> print(value))
            .groupBy((key, value) -> key)
            .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(5)))
            .count(Materialized.as("my-state-store"))
            .filter((windowedKey, count) -> count > 0)
            .toStream()
            .filter((messageKey, messageValue) -> {
                log.info("Key: {} count: {}", messageKey.key(), messageValue);
                Optional<Output> outputResult = service.process(messageKey.key());
                if (outputResult.isPresent()) {
                    Output output = outputResult.get();
                    print(output);
                    result.set(KeyValue.pair(messageKey.key(), output));
                    return true;
                }
                return false;
            })
            .map((messageKey, messageValue) -> result.get());
}
Then I added a second processor to the same class.
@Bean
public Function<KStream<String, Input>, KStream<String, Output>> processorTwo() {
    final AtomicReference<KeyValue<String, Output>> result = new AtomicReference<>(null);
    return kStream -> kStream
            .filter((key, value) -> value != null)
            .filter((key, value) -> isValidEvent(value))
            .peek((key, value) -> print(value))
            .filter((messageKey, messageValue) -> {
                Optional<Output> outputResult = service.process(messageValue);
                if (outputResult.isPresent()) {
                    Output output = outputResult.get();
                    print(output);
                    result.set(KeyValue.pair(String.format("%s-%s", output.getNameOne(), output.getNametwo()), output));
                    return true;
                }
                return false;
            })
            .map((messageKey, messageValue) -> result.get());
}
application.yaml
spring:
  application:
    name: my-common-processor
  cloud:
    stream:
      function:
        definition: processorOne; processorTwo
      bindings:
        processorOne-in-0:
          destination: input-topic
          group: ${spring.application.name}-processorOne
        processorOne-out-0:
          destination: output-topic
          group: ${spring.application.name}-processorOne
        processorTwo-in-0:
          destination: input-topic
          group: ${spring.application.name}-processorTwo
        processorTwo-out-0:
          destination: output-topic
          group: ${spring.application.name}-processorTwo
      kafka:
        binder:
          brokers: 127.0.0.1:9092
          auto-create-topics: false
          auto-add-partitions: false
        streams:
          binder:
            configuration:
              spring.json.use.type.headers: false
              spring.json.trusted.packages: '*'
              max.poll.records: 10
              max.block.ms: 5000
              default.key.serde: org.apache.kafka.common.serialization.Serdes$StringSerde
              default.value.serde: org.apache.kafka.common.serialization.Serdes$StringSerde
            deserialization-exception-handler: sendtodlq
            auto-create-topics: false
            auto-add-partitions: false
            functions:
              processorOne:
                application-id: ${spring.application.name}-processorOne
              processorTwo:
                application-id: ${spring.application.name}-processorTwo
            bindings:
              processorOne-in-0:
                consumer:
                  dlqName: error-topic
              processorTwo-in-0:
                consumer:
                  dlqName: error-topic
  kafka:
    bootstrap-servers: 127.0.0.1:9092  # for kafka admin
Questions:
Is this the correct way to do this?
With this setup, the second processor never processes any messages, not even the messages that should be processed by it.
If the setup is correct, how do I add/configure a deserialization-exception-handler for each processor?
This is where the low-level Processor API comes into the picture. Structure your project in this way.
Write a filtering processor that extends AbstractProcessor and filters the events from the input topic (hopefully your event has a field that identifies it as EventType1 or EventType2). Use the context forwarder inside the overridden process method. I would call this EventFilteringProcessor.
if (EventType1)
context.forward(key, value, To.child(EventType1));
if (EventType2)
context.forward(key, value, To.child(EventType2));
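For illustration, a minimal sketch of such a filtering processor, using the same AbstractProcessor API the answer refers to (Input is the value type from the question; getEventType() is an assumed accessor for whatever field distinguishes the two event types, and the child names must match the processor names registered in the topology below):

import org.apache.kafka.streams.processor.AbstractProcessor;
import org.apache.kafka.streams.processor.To;

public class EventFilteringProcessor extends AbstractProcessor<String, Input> {

    @Override
    public void process(String key, Input value) {
        if (value == null) {
            return; // nothing to route
        }
        // getEventType() is illustrative; use whatever field distinguishes the two event types
        if ("EventType1".equals(value.getEventType())) {
            context().forward(key, value, To.child("EventType1"));
        } else {
            context().forward(key, value, To.child("EventType2"));
        }
    }
}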
Write two separate processor instances, i.e. two separate classes that extend AbstractProcessor. I would call these classes EventType1Processor and EventType2Processor.
Describe your stream processor topology in the following way, in the main Spring Boot application that implements the ProcessController.
Topology topology = new Topology();
topology.addSource("Source", "YourInputTopic")
        .addProcessor("EventFilterClass", EventFilteringProcessor::new, "Source")
        .addProcessor("EventType1", EventType1Processor::new, "EventFilterClass")
        .addProcessor("EventType2", EventType2Processor::new, "EventFilterClass");
final KafkaStreams streams = new KafkaStreams(topology, streamConfig);
streams.start();
You can put your respective business logic in the overridden process methods of EventType1Processor and EventType2Processor.
Hope this helps.

Consuming multiple topics with different Avro types in the Spring Cloud Stream framework

I was able to figure out how to consume 2 different Avro types from 2 different topics, but I have 3 topics with 3 different Avro types. How do I do this in Spring Cloud Stream with Kafka Streams?
Working code with 2 topics:
When I add consumerProcess-in-2, the BiConsumer won't recognize the third Avro type or the third topic, which makes sense since it is a BiConsumer. Any suggestion would be helpful.
Thank you in advance.
cloud:
  stream:
    function:
      definition: consumerProcess
    bindings:
      consumerProcess-in-0:
        content-type: application/*+avro
        destination: Topic-1
      consumerProcess-in-1:
        content-type: application/*+avro
        destination: Topic-2
public BiConsumer<KStream<String, Avro1>, KStream<String, Avro2>> consumerProcess() {
    return (avro1, avro2) -> {
        avro1.foreach((key, value) -> {
            log.info("Avro1 ({})", value);
        });
        avro2.foreach((key, value) -> {
            log.info("avro2 ({})", value);
        });
    };
}
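One possible approach, sketched here rather than taken from the post: if the three streams do not need to be processed together, you can declare one Consumer per topic/Avro type instead of a single BiConsumer. Avro3, the bean names, and the configuration class are assumed names that mirror the pattern above.

import java.util.function.Consumer;

import org.apache.kafka.streams.kstream.KStream;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AvroConsumersConfiguration {

    private static final org.slf4j.Logger log =
            org.slf4j.LoggerFactory.getLogger(AvroConsumersConfiguration.class);

    // Each consumer binds to its own consumerXxx-in-0 binding, configured with its
    // destination and content-type just like the bindings above.
    @Bean
    public Consumer<KStream<String, Avro1>> consumerOne() {
        return stream -> stream.foreach((key, value) -> log.info("Avro1 ({})", value));
    }

    @Bean
    public Consumer<KStream<String, Avro2>> consumerTwo() {
        return stream -> stream.foreach((key, value) -> log.info("Avro2 ({})", value));
    }

    @Bean
    public Consumer<KStream<String, Avro3>> consumerThree() {
        return stream -> stream.foreach((key, value) -> log.info("Avro3 ({})", value));
    }
}

The function definition then becomes consumerOne;consumerTwo;consumerThree, with a consumerXxx-in-0 binding per consumer carrying its destination and content-type.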

NestJS Kafka implementation

I've read the NestJS microservice and Kafka documentation but couldn't figure out some of it. I'd be very thankful if you could help me out.
As the docs say, I have to create a microservice in the main.ts file as follows:
const app = await NestFactory.createMicroservice<MicroserviceOptions>(AppModule, {
  transport: Transport.KAFKA,
  options: {
    client: {
      brokers: ['localhost:9092'],
    },
  },
});
await app.listen(() => console.log('app started'));
Then there is a KafkaModule file like this:
@Module({
  imports: [
    ClientsModule.register([
      {
        name: 'HERO_SERVICE',
        transport: Transport.KAFKA,
        options: {
          client: {
            clientId: 'hero',
            brokers: ['localhost:9092'],
          },
          consumer: {
            groupId: 'hero-consumer',
          },
        },
      },
    ]),
  ],
})
export class KafkaModule implements OnModuleInit {
  constructor(@Inject('HERO_SERVICE') private readonly clientService: KafkaClient) {}

  async onModuleInit() {
    await this.clientService.connect();
  }
}
The first thing I can't figure out is the purpose of the first parameter of createMicroservice. (I passed AppModule and KafkaModule and both worked correctly, knowing that KafkaModule is imported in AppModule.)
The other thing is that, from what I understood, the microservice part and the configuration in the main.ts file are used to subscribe to the topics used in the MessagePattern or EventPattern decorators, while the Kafka client described in the KafkaModule is used to send messages to different topics.
The problem is, if what I said earlier is true, why does the ClientsModule use a default groupId (if not specified) to work as a consumer? The strange thing is that I couldn't find a way to receive any message from any topic using the ClientsModule.
What I'm doing right now is using different group ids in each file so they won't have any conflicts.
The first parameter of createMicroservice guides how the consumer connects to Kafka when you want to consume messages from a specific topic.
Example: we want to get messages from the topic test01.
How do we declare it?
import { Controller } from '@nestjs/common';
import { MessagePattern, Payload } from '@nestjs/microservices';

@Controller('sync')
export class SyncController {
  @MessagePattern('test01')
  handleTopicTest01(@Payload() message: Sync): any {
    // Handle your message here
  }
}
The second block is used as a producer, not a consumer. When the application wants to send a message to a specific topic, the client module supports this.
@Get()
sayHello() {
  return this.clientModule.send('say.hello', 'hello world');
}

How do you use WebFlux to parse an event stream that does not conform to Server-Sent Events?

I am trying to use WebClient to deal with the Docker /events endpoint. However, it does not conform to the text/event-stream contract, in which each message is separated by two LFs; it just sends one JSON document followed by another.
It also sets the MIME type to application/json rather than text/event-stream.
What I am thinking of, but have not implemented yet, is to create a Node proxy that adds the required line feeds in between, but I was hoping to avoid that kind of workaround.
Instead of trying to handle a ServerSentEvent, just receive it as a String, then attempt to parse it as JSON (ignoring the ones that fail, which I presume may happen, although I haven't hit it myself):
@PostConstruct
public void setUpStreamer() throws JsonProcessingException {
    final Map<String, List<String>> filters = new HashMap<>();
    filters.put("type", Collections.singletonList("service"));
    WebClient.create(daemonEndpoint)
        .get()
        .uri("/events?filters={filters}",
            mapper.writeValueAsString(filters))
        .retrieve()
        .bodyToFlux(String.class)
        .flatMap(Mono::justOrEmpty)
        .flatMap(s -> {
            // drop entries that fail to parse instead of mapping them to null
            try {
                return Mono.just(mapper.readValue(s, Map.class));
            } catch (IOException e) {
                log.warn("unable to parse {} as JSON", s);
                return Mono.empty();
            }
        })
        .subscribe(
            event -> {
                log.trace("event={}", event);
                refreshRoutes();
            },
            throwable -> log.error("Error on event stream: {}", throwable.getMessage(), throwable),
            () -> log.warn("event stream completed")
        );
}

Log Spring WebFlux types - Mono and Flux

I am new to Spring 5.
1) How can I log the method params which are Mono and Flux types without blocking them?
2) How do I map models at the API layer to business objects at the service layer using MapStruct?
Edit 1:
I have this imperative code which I am trying to convert into reactive code. It has a compilation issue at the moment due to the introduction of Mono in the argument.
public Mono<UserContactsBO> getUserContacts(Mono<LoginBO> loginBOMono)
{
    LOGGER.info("Get contact info for login: {}, and client: {}", loginId, clientId);
    if (StringUtils.isAllEmpty(loginId, clientId)) {
        LOGGER.error(ErrorCodes.LOGIN_ID_CLIENT_ID_NULL.getDescription());
        throw new ServiceValidationException(
                ErrorCodes.LOGIN_ID_CLIENT_ID_NULL.getErrorCode(),
                ErrorCodes.LOGIN_ID_CLIENT_ID_NULL.getDescription());
    }
    if (!loginId.equals(clientId)) {
        if (authorizationFeignClient.validateManagerClientAccess(new LoginDTO(loginId, clientId))) {
            loginId = clientId;
        } else {
            LOGGER.error(ErrorCodes.LOGIN_ID_VALIDATION_ERROR.getDescription());
            throw new AuthorizationException(
                    ErrorCodes.LOGIN_ID_VALIDATION_ERROR.getErrorCode(),
                    ErrorCodes.LOGIN_ID_VALIDATION_ERROR.getDescription());
        }
    }
    UserContactDetailEntity userContactDetail = userContactRepository.findByLoginId(loginId);
    LOGGER.debug("contact info returned from DB {}", userContactDetail);
    // mapstruct to map entity to BO
    return contactMapper.userEntityToUserContactBo(userContactDetail);
}
You can try it like this.
If you want to add logs, you can use .map and add the logs there. If the filters do not pass, the chain returns empty; you can handle that with switchIfEmpty.
loginBOMono
    .filter(loginBO -> !StringUtils.isAllEmpty(loginId, clientId))
    .filter(loginBOMono1 -> loginBOMono1.loginId.equals(clientId))
    .filter(loginBOMono1 -> authorizationFeignClient.validateManagerClientAccess(new LoginDTO(loginId, clientId)))
    .map(loginBOMono1 -> {
        loginBOMono1.loginId = clientId;
        return loginBOMono1;
    })
    .flatMap(loginBOMono1 -> userContactRepository.findByLoginId(loginBOMono1.loginId))
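To tie the two hints together, here is a sketch of how the empty case and the non-blocking logging could look. It only covers the validation and lookup parts (not the authorization branch), and it assumes LoginBO exposes loginId and clientId, that findByLoginId returns a Mono, and that the mapper returns UserContactsBO directly:

// Sketch only: builds on the chain above; ServiceValidationException, ErrorCodes, and the
// mapper are the same types used in the question, but the reactive repository is assumed.
public Mono<UserContactsBO> getUserContacts(Mono<LoginBO> loginBOMono) {
    return loginBOMono
            .doOnNext(loginBO -> LOGGER.info("Get contact info for login: {}, and client: {}",
                    loginBO.loginId, loginBO.clientId))   // log without blocking
            .filter(loginBO -> !StringUtils.isAllEmpty(loginBO.loginId, loginBO.clientId))
            .switchIfEmpty(Mono.error(new ServiceValidationException(
                    ErrorCodes.LOGIN_ID_CLIENT_ID_NULL.getErrorCode(),
                    ErrorCodes.LOGIN_ID_CLIENT_ID_NULL.getDescription())))
            .flatMap(loginBO -> userContactRepository.findByLoginId(loginBO.loginId))
            .doOnNext(entity -> LOGGER.debug("contact info returned from DB {}", entity))
            .map(contactMapper::userEntityToUserContactBo);
}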
