I think I'm not grasping some basic concepts of Kafka here, so I'm hoping Stack Overflow may be able to help.
I've been trying to learn Kafka with Spring Boot by following this Git repo here:
I understand how to take a Java class from one microservice, send it to Kafka without Avro, and consume/deserialise it on the other microservice...however I hate that idea, as it means I must have an identical class on the other microservice in terms of package location/name etc.
So overall I've two questions here I guess.
I want to understand how I can share messages across my Spring Boot microservices and map them to classes without copying said classes from one service to the other.
I want to be able to consume, from my Spring Kafka listeners, messages created in another language, say C#.
Where I'm currently at is: I have the Avro example from the repo above up and running, along with my local Kafka and Schema Registry instances.
However, if I create a duplicate class, call it UserTest (for example), and have it identical to the User class consumed here, I get stack traces like the following:
Caused by: org.springframework.messaging.converter.MessageConversionException: Cannot convert from [io.confluent.developer.User] to [io.confluent.developer.kafkaworkshop.streams.User] for GenericMessage [payload={"name": "vik", "age": 33}, headers={kafka_offset=6, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer#54200a0e, kafka_timestampType=CREATE_TIME, kafka_receivedMessageKey=vik, kafka_receivedPartitionId=1, kafka_receivedTopic=users12, kafka_receivedTimestamp=1611278455840, kafka_groupId=simple-consumer}]
Am I missing something exceptionally basic here? I thought that once the message was sent in Avro format it could be consumed and mapped to another object which had the same fields...that way, if the object was created in C#, the Spring service would be able to interpret it, no?
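To make question 1 concrete, here's the kind of thing I'm hoping is possible. This is only a rough sketch: the class, topic and group names are illustrative, and it assumes Confluent's KafkaAvroDeserializer with specific.avro.reader left at false, so the listener receives a GenericRecord rather than the generated User class:

import org.apache.avro.generic.GenericRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class UserListener {

    // With specific.avro.reader=false the Avro deserializer hands back a GenericRecord,
    // so the fields can be read by name without sharing the generated User class.
    @KafkaListener(topics = "users12", groupId = "simple-consumer")
    public void onUser(GenericRecord record) {
        String name = record.get("name").toString();
        int age = (Integer) record.get("age");
        // ...map into whatever local class this service prefers
    }
}

That way a C# producer would only need to publish with the same Avro schema, not share any Java class.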
If anyone can help me that would be great....
Thanks!
I have an application with the file-supplier Spring Cloud module included. My workflow is: track file creation/modification in a particular directory, then push events to a Kafka topic when such events occur. FileSupplierConfiguration is used to configure this file supplier. But now I have to track one more directory and push events to another, relevant Kafka topic. So there is an issue, because there is no way to include multiple FileSupplierConfigurations in the project to configure another file supplier. I remember that one of the main principles of microservices, for which spring-cloud-stream was designed, is that you do one thing and do it well without affecting others, but this is still the same microservice with the same tracking/pushing functionality, just for another directory and topic. Is there any way to add one more file supplier with its own configuration via the file-supplier module? Or is the best solution for this issue to run one more application instance with a different configuration?
Yes, the best way is to have another instance of this application, but with its own specific configuration properties. This is really how these functions have been designed: a microservice based on convention over configuration. What you are asking really contradicts Spring Boot expectations. Imagine you'd need to connect to several databases: you can only have a single JdbcTemplate auto-configured, and the rest is only possible manually. But it is better to have the same code base which relies on the auto-configuration and may apply different props.
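To illustrate the JdbcTemplate analogy, a minimal sketch of what the manual route looks like for a second database (the property prefix and bean names are assumptions, not anything provided by the framework):

import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.core.JdbcTemplate;

@Configuration
public class SecondDataSourceConfig {

    // Auto-configuration only gives you one DataSource/JdbcTemplate pair;
    // any additional one has to be declared by hand and bound to its own properties.
    @Bean
    @ConfigurationProperties("app.second-datasource")
    public DataSource secondDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    public JdbcTemplate secondJdbcTemplate(@Qualifier("secondDataSource") DataSource dataSource) {
        return new JdbcTemplate(dataSource);
    }
}

The same reasoning applies to the file supplier: the module auto-configures exactly one, so a second directory/topic pair is cleaner as a second instance with its own properties.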
I have 2 Spring Boot microservices, core and web:
The core service reacts to some event (EmployeeCreatedEvent) which is triggered by web.
The core service is using the Jackson serializer to serialize commands, queries, events and messages, whereas the web service is using the XStream serializer.
I am getting the below error in core while handling the EmployeeCreatedEvent triggered by web:
Caused by: com.fasterxml.jackson.core.JsonParseException: Unexpected character ('<' (code 60)): expected a valid value (JSON String, Number, Array, Object or token 'null', 'true' or 'false')
I am using the below properties (jackson for core and default for web):
axon.serializer.general = jackson/default
axon.serializer.events = jackson/default
axon.serializer.messages = jackson/default
Can someone suggest whether it is OK to use different serializers for the same event in different services?
I agree with @Augusto here, and you should make a decision about which serialization format you are going to use across all your services.
I am assuming you started with the default serializer (which is XStream and XML) and later on decided to move to Jackson (which is JSON).
In that case, there are 2 pieces of advice I can share with you:
You can write a custom Serializer which wraps both implementations, tries each of them, and sees which one works, for example trying XML and falling back to JSON (there's a rough sketch of this below).
Or you can have a component which listens to all events from your EventStore, deserializes them using XStream and writes them back to another EventStore using Jackson. In this case, for the migration period, you will have this component using 2 event streams (one for each EventStore), but after the migration is done your whole EventStore will be in JSON. This requires some work but is the best approach in my opinion, and will save you a lot of time and pain in the future.
You can read more about configuring 2 sources here.
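For the first option, a rough sketch of the fallback idea. The Serializer API shown here is an assumption based on Axon 4 and may differ in your version; a real implementation would implement org.axonframework.serialization.Serializer and delegate the remaining methods to the primary serializer:

import org.axonframework.serialization.SerializedObject;
import org.axonframework.serialization.Serializer;

public class FallbackDeserializingSerializer {

    private final Serializer primary;   // e.g. a JacksonSerializer
    private final Serializer fallback;  // e.g. an XStreamSerializer

    public FallbackDeserializingSerializer(Serializer primary, Serializer fallback) {
        this.primary = primary;
        this.fallback = fallback;
    }

    // New events are JSON and deserialize with Jackson; old events are XML,
    // so when Jackson fails we retry with XStream.
    public <S, T> T deserialize(SerializedObject<S> serialized) {
        try {
            return primary.deserialize(serialized);
        } catch (Exception e) {
            return fallback.deserialize(serialized);
        }
    }
}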
Suppose I have some third party library and I want to integrate it with Spring in order to be able to use it as a part of Spring transaction. I didn't find any relevant information on the Internet and looked into the source code of integrations of RabbitMQ and MyBatis libraries. As I understood from their source code I should implement org.springframework.transaction.PlatformTransactionManager and interact with TransactionSynchronizationManager. And there are two questions:
How Spring "know" about and instanciate implementations of PlatformTransactionManager?
Suppose there are two resources being used in a transaction, through RabbitTemplate and JdbcTemplate. What will be committed first: the changes in the database or the messages sent?
Also, I would really appreciate it if somebody could point me to some guide or book about interacting with Spring internals.
You have to instantiate them yourself, like a DataSourceTransactionManager or a HibernateTransactionManager. Spring Boot does that for you under the hood, but with plain Spring you need to do it yourself.
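In plain Spring that boils down to declaring the bean yourself, roughly like this sketch (assuming a single JDBC DataSource):

import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@Configuration
@EnableTransactionManagement
public class TransactionConfig {

    // With plain Spring you declare the transaction manager explicitly;
    // Spring Boot auto-configures an equivalent bean for a single DataSource.
    @Bean
    public PlatformTransactionManager transactionManager(DataSource dataSource) {
        return new DataSourceTransactionManager(dataSource);
    }
}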
What you want are distributed transactions (XADataSource), which are not possible with RabbitMQ.
For RabbitMQ you should read this here first: https://www.rabbitmq.com/confirms.html . Then make sure you understand transactions on the JDBC side. Then you can reason about how they both work together.
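To make the ordering question concrete, here is a rough sketch of the usual "best effort" setup (bean names, exchange/routing-key names and the table are made up), where the RabbitTemplate has channelTransacted enabled and so synchronizes with the surrounding JDBC transaction:

import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService {

    private final JdbcTemplate jdbcTemplate;
    private final RabbitTemplate rabbitTemplate; // configured with setChannelTransacted(true)

    public OrderService(JdbcTemplate jdbcTemplate, RabbitTemplate rabbitTemplate) {
        this.jdbcTemplate = jdbcTemplate;
        this.rabbitTemplate = rabbitTemplate;
    }

    @Transactional
    public void placeOrder(String orderId) {
        // The database change is part of the surrounding JDBC transaction.
        jdbcTemplate.update("INSERT INTO orders (id) VALUES (?)", orderId);

        // With a transacted channel the send is committed on the broker only after
        // the JDBC transaction commits, so the database goes first; this is not XA,
        // and the broker commit can still fail after the database commit succeeded.
        rabbitTemplate.convertAndSend("orders.exchange", "orders.created", orderId);
    }
}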
For the Java side, you might enjoy this book entirely about transactions: https://www.marcobehler.com/books/1-java-database-connections-transactions
Does anyone know how to use Apache Avro RPC with Spring Boot? Every single Avro implementation I have seen is hosted on a Netty server.
You might be trying to achieve the same thing I'm trying to achieve -- speeding up JSON serialization with Spring Boot and Spring Web. Or maybe you just want to use Avro? My comment is a little late, since it's months after you posted, but I have run across this information about using an Avro message converter, so I thought I would share it with you to see if it helps:
https://docs.spring.io/spring-cloud-stream/docs/Brooklyn.RELEASE/reference/html/contenttypemanagement.html#_schema_based_message_converters
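From what I can tell, the converter from that page is registered roughly like this (the MIME type is whatever you pick for the binding's contentType; the package comes from the spring-cloud-stream-schema module of that era, so adjust for your version):

import org.springframework.cloud.stream.schema.avro.AvroSchemaMessageConverter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.converter.MessageConverter;
import org.springframework.util.MimeType;

@Configuration
public class AvroConverterConfig {

    // Registers an Avro-aware MessageConverter; the matching binding then uses
    // the same value for its contentType (e.g. avro/bytes).
    @Bean
    public MessageConverter avroMessageConverter() {
        return new AvroSchemaMessageConverter(MimeType.valueOf("avro/bytes"));
    }
}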
Or did you already find it? If so, can you share whatever solution you came up with? Our REST JSON serialization takes much longer than the whole rest of the operation, and I would like to speed it up as much as possible.
I am facing a strange situation while using HornetQ.
My application architecture -
JMS provider : HornetQ (Standalone server, not used for anything else. I've created 2 queues on this server, say Q1 and Q2).
Producer : A web application deployed on a separate machine. This application creates instances of "ObjectMessage", passing a "Job" class instance as an argument to the "ObjectMessage.setObject()" method, and adds the message to Q1. Uses Spring JMS.
I also set a string property named "AGENT" on the message before adding it to the queue.
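Roughly, the producing side looks like this sketch (names are illustrative rather than our exact code):

import java.io.Serializable;
import javax.jms.ObjectMessage;
import org.springframework.jms.core.JmsTemplate;

public class JobProducer {

    private final JmsTemplate jmsTemplate;

    public JobProducer(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    // Wraps a Job in an ObjectMessage, tags it with the AGENT property and sends it to Q1.
    public void send(Serializable job, String agent) {
        jmsTemplate.send("Q1", session -> {
            ObjectMessage message = session.createObjectMessage();
            message.setObject(job);
            message.setStringProperty("AGENT", agent); // leaving this out is when the message seems to vanish
            return message;
        });
    }
}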
What's peculiar is that if I call ObjectMessage.setStringProperty("AGENT", null), or if I do not add the property to the message at all, the message does not get added to Q1. However, this does not happen on Q2, where I'm able to see the message in HornetQ's JMX console.
Is there some queue specific configuration that I should be looking out for?
Apologies for the loose wording - my team and I have been having a tough time trying to fix this issue.
Thanks.
How are you creating the producer, and how are you sending the message?
It seems you're not committing on a transactional session?
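For reference, if the session is transacted the broker only sees the message after an explicit commit; a minimal plain-JMS sketch of what I mean (the queue name and property come from your description, the rest is assumed):

import java.io.Serializable;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.ObjectMessage;
import javax.jms.Session;

public class TransactedSendExample {

    // With a transacted session, nothing reaches the queue until commit() is called.
    public void send(ConnectionFactory connectionFactory, Serializable job) throws Exception {
        Connection connection = connectionFactory.createConnection();
        try {
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            MessageProducer producer = session.createProducer(session.createQueue("Q1"));

            ObjectMessage message = session.createObjectMessage(job);
            message.setStringProperty("AGENT", "agent-1");
            producer.send(message);

            session.commit(); // without this the message never shows up
        } finally {
            connection.close();
        }
    }
}

With Spring JMS the equivalent check is whether the template or listener container is marked sessionTransacted and whether that transaction actually commits.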
I'm assuming you are using JMS, but I would need to see some code to help you better. Usually the JBoss forum is a more suitable place for discussions like this, since Stack Overflow is not really a discussion forum.
I think the best thing would be for you to open a thread on the JBoss forum (since it will be followed by a discussion) and provide the link here.