Can we use Logstash in Spring Boot to sync data from an RDBMS (Oracle)?

We can sync data from Oracle (or any DB) to Elasticsearch using the Logstash JDBC input plugin. However, I can't find any way to manipulate the data coming from the DB within this JDBC plugin. I want to use Logstash (or any plugin) in my Spring Boot application to do the same thing, so that I can manipulate the data and the column names before saving them into Elasticsearch.

There are many Logstash input plugins, and you can do basic stream processing with the grok filter inside Logstash. However, I suggest using the Kafka input plugin: do your stream processing in your application and then send the data to Logstash.
Create a consumer with your Kafka broker, publish your documents with a publisher class inside your Spring project, and then use the Logstash Kafka input to ingest the data into your index. Moreover, with this approach you get a robust consumer-publisher pipeline with the help of Apache Kafka.
Find an example as follows.
<!--Kafka-->
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>2.5.8.RELEASE</version>
</dependency>
After manipulating your documents and data, create a publisher class to publish these documents:
public class DriverProducer {

    @Autowired
    KafkaTemplate<Integer, String> kafkaTemplate;

    @Autowired
    ObjectMapper objectMapper;

    public void messenger(Object convey) throws JsonProcessingException {
        String message = objectMapper.writeValueAsString(convey);
        ListenableFuture<SendResult<Integer, String>> listenableFuture = kafkaTemplate.sendDefault(null, message);
        listenableFuture.addCallback(new ListenableFutureCallback<SendResult<Integer, String>>() {
            @Override
            public void onFailure(Throwable throwable) {
                failHandler(null, message, throwable);
            }

            @Override
            public void onSuccess(SendResult<Integer, String> result) {
                successHandler(result);
            }
        });
    }

    private void failHandler(Integer key, String message, Throwable throwable) {
        //log.error("Unable to send the message for following Error :" + throwable.getMessage());
        try {
            throw throwable;
        } catch (Throwable anotherThrowable) {
            //log.error("**Supreme Error on throwing the throwable**" + anotherThrowable.getMessage());
        }
    }

    private void successHandler(SendResult<Integer, String> result) {
        //log.info("Message sent successfully :" + result);
    }

    @AllArgsConstructor
    @NoArgsConstructor
    @Getter
    @Setter
    public static class Convey {
        private SagaSequence sequence;
        private Integer key;
        private Date date;
    }
}
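Note that kafkaTemplate.sendDefault(..) publishes to the template's default topic, so the producer side needs one configured. A minimal application.properties sketch (the broker address and topic name are assumptions chosen to match the Logstash config below):
spring.kafka.bootstrap-servers=localhost:9092
spring.kafka.template.default-topic=Second-Topic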
And your Logstash config file may look like this
input {
  kafka {
    group_id => "35834"
    topics => ["Second-Topic"]
    bootstrap_servers => "localhost:9092"
    codec => json
  }
}
filter {
}
output {
  file {
    path => "/SOMEPATH"
  }
  elasticsearch {
    hosts => ["localhost:9200"]
    document_type => "_doc"
    index => "logger"
  }
  stdout { codec => rubydebug }
}
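The filter block above is left empty; the field manipulation the question asks about (renaming columns, dropping fields) can also be done there with the standard mutate filter. A minimal sketch with made-up field names:
filter {
  mutate {
    rename => { "EMP_NAME" => "employeeName" }
    remove_field => ["@version"]
  }
}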

Related

How to specify multiple topics in separate config properties for one Kafka listener?

I would like to create a Spring Boot application that reads from several Kafka topics. I realise I can create a comma-separated list of topics in my application.properties, however I would like the topic names to be listed separately for readability, and so I can use each topic name to work out how to process the message.
I've found the following questions, but they all have the topics listed as a comma separated array:
Consume multiple topics in one listener in spring boot kafka
Using multiple topic names with KafkaListener annotation
Enabling #KafkaListener to take in variable topic names from application.yml file
Pass array list of topic names to #KafkaListener
The closest I've come is with the following:
application.properties
kafka.topic1=topic1
kafka.topic2=topic2
KafkaConsumer
@KafkaListener(topics = "#{'${kafka.topic1}'},#{'${kafka.topic2}'}")
public void receive(@Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
                    @Header(required = false, name = KafkaHeaders.RECEIVED_MESSAGE_KEY) String key,
                    @Payload(required = false) String payload) throws IOException {
}
This gives the error:
Caused by: org.apache.kafka.common.errors.InvalidTopicException: Invalid topics: [topic1,topic2]
I realise I need it to be {"topic1", "topic2"} but I can't work out how.
Having the annotation @KafkaListener(topics = "#{'${kafka.topic1}'}") correctly subscribes to the first topic. And if I change it to @KafkaListener(topics = "#{'${kafka.topic2}'}") I can correctly subscribe to the second topic.
It's just the creating of the array of topics in the annotation that I can't fathom.
Any help would be wonderful.
@KafkaListener(id = "so71497475", topics = { "${kafka.topic1}", "${kafka.topic2}" })
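For the original goal of using the topic name to work out how to process each message, a sketch that combines that listener with the RECEIVED_TOPIC header (the method body is illustrative):
@KafkaListener(id = "so71497475", topics = { "${kafka.topic1}", "${kafka.topic2}" })
public void receive(@Header(KafkaHeaders.RECEIVED_TOPIC) String topic, @Payload String payload) {
    if (topic.equals("topic1")) {
        // handle messages from the first topic
    } else {
        // handle messages from the second topic
    }
}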
EDIT
And this is a more sophisticated technique which would allow you to add more topics without changing any code:
@SpringBootApplication
@EnableConfigurationProperties
public class So71497475Application {

    public static void main(String[] args) {
        SpringApplication.run(So71497475Application.class, args);
    }

    @KafkaListener(id = "so71497475", topics = "#{@myProps.kafkaTopics}")
    void listen(String in) {
        System.out.println(in);
    }

    @Bean // This will add the topics to the broker if not present
    KafkaAdmin.NewTopics topics(MyProps props) {
        return new KafkaAdmin.NewTopics(props.getTopics().stream()
                .map(t -> TopicBuilder.name(t).partitions(1).replicas(1).build())
                .toArray(size -> new NewTopic[size]));
    }
}

@ConfigurationProperties("my.kafka")
@Component
class MyProps {

    private List<String> topics = new ArrayList<>();

    public List<String> getTopics() {
        return this.topics;
    }

    public void setTopics(List<String> topics) {
        this.topics = topics;
    }

    public String[] getKafkaTopics() {
        return this.topics.toArray(new String[0]);
    }
}
my.kafka.topics[0]=topic1
my.kafka.topics[1]=topic2
my.kafka.topics[2]=topic3
so71497475: partitions assigned: [topic1-0, topic2-0, topic3-0]
If you have your topics configured as comma-separated, like:
kafka.topics = topic1,topic2
In this case you can simply use:
@KafkaListener(topics = "#{'${kafka.topics}'.split(',')}")
void listen() {}

Is there a way to log all incoming kafka requests in spring?

I'm using a simple Kafka handler:
@KafkaListener(
        topics = Topic.NAME,
        clientIdPrefix = KafkaHandler.LISTENER_ID)
public class KafkaHandler {

    public static final String LISTENER_ID = "kafka_listener";

    @KafkaHandler(isDefault = true)
    @Description(value = "Event received")
    public void onEvent(@Payload Payload payload) {
        ...
    }
}
However, my object (Payload in the example) is not mapped properly (some fields are null).
Is there a way to log all incoming Kafka KV pairs somewhere in a spring-kafka app?
You can process the entire Kafka record instead of only the payload.
@KafkaListener(topics = "any-topic")
void listener(ConsumerRecord<String, String> record) {
    log.info("{}", record.key());
    log.info("{}", record.value());
    log.info("{}", record.partition());
    log.info("{}", record.topic());
    log.info("{}", record.offset());
}
Replace String with your desired key and value types, and define the deserializer classes in your application properties:
spring.kafka.consumer.key-deserializer=YourKeyDeserializer.class
spring.kafka.consumer.value-deserializer=YourValueDeserializer.class
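If the goal is to log every incoming record in one place rather than inside each listener, another option (a sketch, not from the original answer, assuming spring-kafka 2.x where RecordInterceptor's single-argument intercept can be written as a lambda) is to set a RecordInterceptor on the listener container factory, e.g. inside a @Slf4j @Configuration class:
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // log key/value and position of every record before it reaches the @KafkaListener method
    factory.setRecordInterceptor(record -> {
        log.info("topic={} partition={} offset={} key={} value={}",
                record.topic(), record.partition(), record.offset(), record.key(), record.value());
        return record;
    });
    return factory;
}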

How to process multiple outputs from Transformer (Kafka streams DSL)

I need two outputs from a transformer. How can I process multiple outputs from a Transformer with the Kafka Streams DSL? I'd like to get two KStreams with different types after transform().
someMethod(KStream<String, Transaction> transaction) {
    transaction
        .transform(() -> new MyTransformer(...))
        // what can I do here?
}

public class MyTransformer implements Transformer<...> {
    public KeyValue<String, Aggregator> transform(String key, Integer value) {
        if (...) {
            context.forward(key1, new A(...), To.child("first_child"));
        } else {
            context.forward(key2, new B(...), To.child("second_child"));
        }
    }
}
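For reference, one DSL-only way to end up with two typed KStreams is to have the transformer return a common value type instead of forwarding to named children, and then split the stream afterwards. A sketch (A and B are the value types from the snippet above; everything else is illustrative):
// assumes MyTransformer now returns KeyValue<String, Object> holding either an A or a B
KStream<String, Object>[] branches = transaction
        .transform(() -> new MyTransformer())
        .branch(
                (key, value) -> value instanceof A,   // first output
                (key, value) -> value instanceof B);  // second output

KStream<String, A> firstOutput = branches[0].mapValues(value -> (A) value);
KStream<String, B> secondOutput = branches[1].mapValues(value -> (B) value);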

Dozer seeks XML configs instead of Java configs

I am working on a Spring Boot project with Spring Data Rest, Gradle and an Oracle Express DB, in which I use DozerBeanMapper to map entities to DTOs and vice versa. I use no XML configurations for Dozer, just Java ones:
@Slf4j
@Configuration
public class DozerConfig {

    @Bean
    public DozerBeanMapper getDozerMapper() {
        log.info("Initializing DozerBeanMapper bean");
        return new DozerBeanMapper();
    }
}
Also, for clarity, I have explicitly configured all the fields that have to be mapped, although they all have the same names. For example, my Client mapper:
@Slf4j
@Component
public class ClientMapper extends BaseMapper {

    private BeanMappingBuilder builder = new BeanMappingBuilder() {
        @Override
        protected void configure() {
            mapping(Client.class, ClientDTO.class)
                    .fields("id", "id")
                    .fields("name", "name")
                    .fields("midName", "midName")
                    .fields("surname", "surname")
                    .exclude("password")
                    .fields("phone", "phone")
                    .fields("email", "email")
                    .fields("address", "address")
                    .fields("idCardNumber", "idCardNumber")
                    .fields("idCardIssueDate", "idCardIssueDate")
                    .fields("idCardExpirationDate", "idCardExpirationDate")
                    .fields("bankAccounts", "bankAccounts")
                    .fields("accountManager", "accountManager")
                    .fields("debitCardNumber", "debitCardNumber")
                    .fields("creditCardNumber", "creditCardNumber")
                    .fields("dateCreated", "dateCreated")
                    .fields("dateUpdated", "dateUpdated");
        }
    };

    @Autowired
    public ClientMapper(DozerBeanMapper mapper) {
        super(mapper);
        mapper.addMapping(builder);
    }

    public ClientDTO toDto(Client entity) {
        log.info("Mapping Client entity to DTO");
        return mapper.map(entity, ClientDTO.class);
    }

    public Client toEntity(ClientDTO dto) {
        log.info("Mapping Client DTO to entity");
        return mapper.map(dto, Client.class);
    }

    public List<ClientDTO> toDtos(List<Client> entities) {
        log.info("Mapping Client entities to DTOs");
        return entities.stream()
                .map(entity -> toDto(entity))
                .collect(Collectors.toList());
    }

    public List<Client> toEntities(List<ClientDTO> dtos) {
        log.info("Mapping Client DTOs to entities");
        return dtos.stream()
                .map(dto -> toEntity(dto))
                .collect(Collectors.toList());
    }

    public EmployeeDTO toEmployeeDto(Employee entity) {
        log.info("Mapping Employee entity to DTO");
        return mapper.map(entity, EmployeeDTO.class);
    }

    public Employee toEmployeeEntity(EmployeeDTO dto) {
        log.info("Mapping Employee DTO to entity");
        return mapper.map(dto, Employee.class);
    }

    public List<EmployeeDTO> toEmployeeDtos(List<Employee> entities) {
        log.info("Mapping Employee entities to DTOs");
        return entities.stream()
                .map(entity -> toEmployeeDto(entity))
                .collect(Collectors.toList());
    }

    public List<Employee> toEmployeeEntities(List<EmployeeDTO> dtos) {
        log.info("Mapping Employee DTOs to entities");
        return dtos.stream()
                .map(dto -> toEmployeeEntity(dto))
                .collect(Collectors.toList());
    }
}
Despite this I get the following exception:
"httpStatus": "500 Internal Server Error",
"exception": "java.lang.IllegalArgumentException",
"message": "setAttribute(name, value):\n name: "http://apache.org/xml/features/validation/schema\"\n value: \"true\"",
"stackTrace": [
"oracle.xml.jaxp.JXDocumentBuilderFactory.setAttribute(JXDocumentBuilderFactory.java:289)",
"org.dozer.loader.xml.XMLParserFactory.createDocumentBuilderFactory(XMLParserFactory.java:71)",
"org.dozer.loader.xml.XMLParserFactory.createParser(XMLParserFactory.java:50)",
"org.dozer.loader.xml.MappingStreamReader.<init>(MappingStreamReader.java:43)",
"org.dozer.loader.xml.MappingFileReader.<init>(MappingFileReader.java:44)",
"org.dozer.DozerBeanMapper.loadFromFiles(DozerBeanMapper.java:219)",
"org.dozer.DozerBeanMapper.loadCustomMappings(DozerBeanMapper.java:209)",
"org.dozer.DozerBeanMapper.initMappings(DozerBeanMapper.java:315)",
"org.dozer.DozerBeanMapper.getMappingProcessor(DozerBeanMapper.java:192)",
"org.dozer.DozerBeanMapper.map(DozerBeanMapper.java:120)",
"com.rosenhristov.bank.exception.mapper.ClientMapper.toDto(ClientMapper.java:52)",
"com.rosenhristov.bank.service.ClientService.lambda$getClientById$0(ClientService.java:27)",
"java.base/java.util.Optional.map(Optional.java:265)",
"com.rosenhristov.bank.service.ClientService.getClientById(ClientService.java:27)",
"com.rosenhristov.bank.controller.ClientController.getClientById(ClientController.java:57)",
"java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)",
"java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)",
"java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)",
"java.base/java.lang.reflect.Method.invoke(Method.java:566)",
"org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:197)",
"org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:141)",
"org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:106)",
"org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:893)",
"org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:807)",
"org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)",
"org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1061)",
"org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:961)",
"org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006)",
"org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:898)",
"javax.servlet.http.HttpServlet.service(HttpServlet.java:626)",
"org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883)",
"javax.servlet.http.HttpServlet.service(HttpServlet.java:733)",
"org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)",
"org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)",
"org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)",
"org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)",
"org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)",
"org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100)",
"org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119)",
"org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)",
"org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)",
"org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93)",
"org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119)",
"org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)",
"org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)",
"org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter.doFilterInternal(WebMvcMetricsFilter.java:93)",
"org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119)",
"org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)",
"org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)",
"org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201)",
"org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119)",
"org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)",
"org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)",
"org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:202)",
"org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97)",
"org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:542)",
"org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:143)",
"org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)",
"org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78)",
"org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)",
"org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:374)",
"org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)",
"org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:868)",
"org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1590)",
"org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)",
"java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)",
"java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)",
"org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)",
"java.base/java.lang.Thread.run(Thread.java:834)"
]
It seems Dozer is trying to find some XML config file, judging by this:
"oracle.xml.jaxp.JXDocumentBuilderFactory.setAttribute(JXDocumentBuilderFactory.java:289)"
It seems it is searching for an XML validation schema.
When I start the application, I just see the following in the IntelliJ console:
Trying to find Dozer configuration file: dozer.properties
2020-12-23 11:46:09.855 WARN 17488 --- [ restartedMain] org.dozer.config.GlobalSettings: Dozer configuration file not found: dozer.properties. Using defaults for all Dozer global properties.
Should I perhaps look into dozer.properties and find out how to make Dozer use Java configurations?
Can someone help me, please? I searched for a solution on the internet but still haven't found a suitable one. I am new to Dozer; I have used MapStruct before.
You can try my beanknife library to generate the DTO file automatically. It will have a read method to convert the entity to a DTO. Although there is no converter from DTO to entity, I think you don't need it in most situations.
@ViewOf(value = Client.class, genName = "ClientDto", includePattern = ".*")
class ClientDtoConfiguration {}
Then it will generate a dto class named "ClientDto" with all the properties of Client.
Client client = ...
ClientDto clientDto = ClientDto.read(client);
List<Client> clients = ...
List<ClientDto> clientDtos = ClientDto.read(clients);
Then serialize the dtos instead of entities.

Spring cloud stream Confluent KStream Avro Consume

I'm trying to consume Confluent Avro messages from a Kafka topic as a KStream with Spring Boot 2.0.
I was able to consume the messages as a MessageChannel but not as a KStream.
@Input(ORGANIZATION)
KStream<String, Organization> organizationMessageChannel();

@StreamListener
public void processOrganization(@Input(KstreamBinding.ORGANIZATION) KStream<String, Organization> organization) {
    log.info("Organization Received:" + organization);
}
Exception:
Exception in thread "pcs-7bb7b444-044d-41bb-945d-450c902337ff-StreamThread-3" org.apache.kafka.streams.errors.StreamsException: stream-thread [pcs-7bb7b444-044d-41bb-945d-450c902337ff-StreamThread-3] Failed to rebalance.
    at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:860)
    at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:808)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:774)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:744)
Caused by: org.apache.kafka.streams.errors.StreamsException: Failed to configure value serde class io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
    at org.apache.kafka.streams.StreamsConfig.defaultValueSerde(StreamsConfig.java:859)
    at org.apache.kafka.streams.processor.internals.AbstractProcessorContext.<init>(AbstractProcessorContext.java:59)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.<init>(ProcessorContextImpl.java:42)
    at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:134)
    at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:404)
    at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:365)
    at org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.createTasks(StreamThread.java:350)
    at org.apache.kafka.streams.processor.internals.TaskManager.addStreamTasks(TaskManager.java:137)
    at org.apache.kafka.streams.processor.internals.TaskManager.createTasks(TaskManager.java:88)
    at org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener.onPartitionsAssigned(StreamThread.java:259)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:264)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:367)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:316)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:295)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1146)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1111)
    at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:851)
    ... 3 more
Caused by: io.confluent.common.config.ConfigException: Missing required configuration "schema.registry.url" which has no default value.
    at io.confluent.common.config.ConfigDef.parse(ConfigDef.java:243)
    at io.confluent.common.config.AbstractConfig.<init>(AbstractConfig.java:78)
    at io.confluent.kafka.serializers.AbstractKafkaAvroSerDeConfig.<init>(AbstractKafkaAvroSerDeConfig.java:61)
    at io.confluent.kafka.serializers.KafkaAvroSerializerConfig.<init>(KafkaAvroSerializerConfig.java:32)
    at io.confluent.kafka.serializers.KafkaAvroSerializer.configure(KafkaAvroSerializer.java:48)
    at io.confluent.kafka.streams.serdes.avro.SpecificAvroSerializer.configure(SpecificAvroSerializer.java:58)
    at io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde.configure(SpecificAvroSerde.java:107)
    at org.apache.kafka.streams.StreamsConfig.defaultValueSerde(StreamsConfig.java:855)
    ... 19 more
Based on the error, I think I'm missing the schema.registry.url configuration for Confluent.
I had a quick look at the sample here.
I'm a bit lost on how to do the same with Spring Cloud Stream using the StreamListener.
Does this need to be a separate configuration, or is there a way to configure the schema.registry.url that Confluent is looking for in application.yml itself?
Here is the code repo: https://github.com/naveenpop/springboot-kstream-confluent
Organization.avsc
{
  "namespace": "com.test.demo.avro",
  "type": "record",
  "name": "Organization",
  "fields": [
    {
      "name": "orgId",
      "type": "string",
      "default": "null"
    },
    {
      "name": "orgName",
      "type": "string",
      "default": "null"
    },
    {
      "name": "orgType",
      "type": "string",
      "default": "null"
    },
    {
      "name": "parentOrgId",
      "type": "string",
      "default": "null"
    }
  ]
}
DemokstreamApplication.java
@SpringBootApplication
@EnableSchemaRegistryClient
@Slf4j
public class DemokstreamApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemokstreamApplication.class, args);
    }

    @Component
    public static class organizationProducer implements ApplicationRunner {

        @Autowired
        private KafkaProducer kafkaProducer;

        @Override
        public void run(ApplicationArguments args) throws Exception {
            log.info("Starting: Run method");

            List<String> names = Arrays.asList("blue", "red", "green", "black", "white");
            List<String> pages = Arrays.asList("whiskey", "wine", "rum", "jin", "beer");

            Runnable runnable = () -> {
                String rPage = pages.get(new Random().nextInt(pages.size()));
                String rName = names.get(new Random().nextInt(names.size()));

                try {
                    this.kafkaProducer.produceOrganization(rPage, rName, "PARENT", "111");
                } catch (Exception e) {
                    log.info("Exception :" + e);
                }
            };

            Executors.newScheduledThreadPool(1).scheduleAtFixedRate(runnable, 1, 1, TimeUnit.SECONDS);
        }
    }
}
KafkaConfig.java
@Configuration
public class KafkaConfig {

    @Value("${spring.cloud.stream.schemaRegistryClient.endpoint}")
    private String endpoint;

    @Bean
    public SchemaRegistryClient confluentSchemaRegistryClient() {
        ConfluentSchemaRegistryClient client = new ConfluentSchemaRegistryClient();
        client.setEndpoint(endpoint);
        return client;
    }
}
KafkaConsumer.java
@Slf4j
@EnableBinding(KstreamBinding.class)
public class KafkaConsumer {

    @StreamListener
    public void processOrganization(@Input(KstreamBinding.ORGANIZATION_INPUT) KStream<String, Organization> organization) {
        organization.foreach((s, organization1) -> log.info("KStream Organization Received:" + organization1));
    }
}
KafkaProducer.java
@EnableBinding(KstreamBinding.class)
public class KafkaProducer {

    @Autowired
    private KstreamBinding kstreamBinding;

    public void produceOrganization(String orgId, String orgName, String orgType, String parentOrgId) {
        try {
            Organization organization = Organization.newBuilder()
                    .setOrgId(orgId)
                    .setOrgName(orgName)
                    .setOrgType(orgType)
                    .setParentOrgId(parentOrgId)
                    .build();

            kstreamBinding.organizationOutputMessageChannel()
                    .send(MessageBuilder.withPayload(organization)
                            .setHeader(KafkaHeaders.MESSAGE_KEY, orgName)
                            .build());
        } catch (Exception e) {
            log.error("Failed to produce Organization Message:" + e);
        }
    }
}
KstreamBinding.java
public interface KstreamBinding {

    String ORGANIZATION_INPUT = "organizationInput";
    String ORGANIZATION_OUTPUT = "organizationOutput";

    @Input(ORGANIZATION_INPUT)
    KStream<String, Organization> organizationInputMessageChannel();

    @Output(ORGANIZATION_OUTPUT)
    MessageChannel organizationOutputMessageChannel();
}
Update 1:
I applied the suggestion from dturanski here and the error vanished. However, I am still not able to consume the message as KStream<String, Organization>; there is no error in the console.
Update 2:
I applied the suggestion from sobychacko here and the message is now consumable, but with empty values in the object.
I've made a commit to the GitHub sample to produce the message from Spring Boot itself, and I am still getting empty values.
Thanks for your time on this issue.
The following implementation will not do what you are intending:
@StreamListener
public void processOrganization(@Input(KstreamBinding.ORGANIZATION) KStream<String, Organization> organization) {
    log.info("Organization Received:" + organization);
}
That log statement is only invoked once, at the bootstrap phase. In order for this to work, you need to invoke some operations on the received KStream and provide the logic there. For example, the following works, where I am providing a lambda expression in the foreach method call:
@StreamListener
public void processOrganization(@Input(KstreamBinding.ORGANIZATION) KStream<String, Organization> organization) {
    organization.foreach((s, organization1) -> log.info("Organization Received:" + organization1));
}
You also have an issue in the configuration where you are wrongly assigning the Avro Serde for keys when they are actually Strings. Change it like this:
default:
  key:
    serde: org.apache.kafka.common.serialization.Serdes$StringSerde
  value:
    serde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
With these changes, I get the logging statement each time I send something to the topic. However, there is a problem in your sending groovy script: I am not getting any actual data from your Organization domain object, but I will let you figure that out.
Update on the issue with the empty Organization domain object
This happens because you have a mix of serialization strategies going on. You are using Spring Cloud Stream's Avro message converters on the producer side, but on the Kafka Streams processor you are using the Confluent Avro Serdes. I just tried with Confluent's serializers all the way from producer to processor, and I was able to see the Organization domain object on the outbound. Here is the modified configuration that makes serialization consistent:
spring:
  application:
    name: kstream
  cloud:
    stream:
      schemaRegistryClient:
        endpoint: http://localhost:8081
      schema:
        avro:
          schema-locations: classpath:avro/Organization.avsc
      bindings:
        organizationInput:
          destination: organization-updates
          group: demokstream.org
          consumer:
            useNativeDecoding: true
        organizationOutput:
          destination: organization-updates
          producer:
            useNativeEncoding: true
      kafka:
        bindings:
          organizationOutput:
            producer:
              configuration:
                key.serializer: org.apache.kafka.common.serialization.StringSerializer
                value.serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
                schema.registry.url: http://localhost:8081
        streams:
          binder:
            brokers: localhost
            configuration:
              schema.registry.url: http://localhost:8081
              commit:
                interval:
                  ms: 1000
              default:
                key:
                  serde: org.apache.kafka.common.serialization.Serdes$StringSerde
                value:
                  serde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
You can also remove the KafkaConfig class as well as the EnableSchemaRegistryClient annotation from the main application class.
Try spring.cloud.stream.kafka.streams.binder.configuration.schema.registry.url: ...
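Expressed in application.yml form, that is the same property shown in the full configuration above; a minimal sketch:
spring:
  cloud:
    stream:
      kafka:
        streams:
          binder:
            configuration:
              schema.registry.url: http://localhost:8081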
