I am working on a project where I need to validate whether a consumer group has been created on a topic. Is there any way in Spring Kafka to validate this?
Currently, I haven't seen describeConsumerGroups supported in Spring Kafka's KafkaAdmin. So, you may need to create a Kafka AdminClient and call the method yourself.
For example: here, I took advantage of the auto-configuration properties class KafkaProperties and autowired it into the service.
@Service
public class KafkaBrokerService implements BrokerService {

    private final Map<String, Object> configs;

    public KafkaBrokerService(KafkaProperties kafkaProperties) { // Autowired
        this.configs = kafkaProperties.buildAdminProperties();
    }

    private AdminClient createAdmin() {
        return AdminClient.create(new HashMap<>(this.configs));
    }

    public SomeDto consumerGroupDescription(String groupId) {
        try (AdminClient adminClient = createAdmin()) {
            // The consumer group's members
            ConsumerGroupDescription consumerGroupDescription = adminClient
                    .describeConsumerGroups(Collections.singletonList(groupId))
                    .describedGroups().get(groupId).get();
            // The consumer group's partitions and the committed offset in each partition
            Map<TopicPartition, OffsetAndMetadata> offsets = adminClient
                    .listConsumerGroupOffsets(groupId).partitionsToOffsetAndMetadata().get();
            // Once you have this information, you can validate it here.
            ...
        } catch (ExecutionException | InterruptedException e) {
            // handle the exception
        }
    }
}
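From there, the validation itself depends on what "created" means for you. A minimal sketch of one possibility (the exact semantics here are an assumption, not something from the question): treat the group as existing on the topic if it is not DEAD and has at least one committed offset on that topic.
private boolean groupExistsOnTopic(ConsumerGroupDescription description,
        Map<TopicPartition, OffsetAndMetadata> offsets, String topic) {
    // ConsumerGroupState is org.apache.kafka.common.ConsumerGroupState
    return description.state() != ConsumerGroupState.DEAD
            && offsets.keySet().stream().anyMatch(tp -> tp.topic().equals(topic));
}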
I have this Spring Boot application.properties:
list1=valueA,valueB
list2=valueC
list3=valueD,valueE
topics=list1,list2,list3
What I'm trying to do is use the values of the topics property in the topics element of the @KafkaListener annotation.
Using the expression
@KafkaListener(topics = {"#{'${topics}'.split(',')}"})
I get list1, list2, and list3 as separate strings.
How can I loop over this list in order to get valueA, valueB, valueC, valueD, and valueE?
Edit: I must parse the topics property so that @KafkaListener registers to consume messages from topics valueA, valueB, valueC, etc.
I read that it is possible to call a method this way:
@KafkaListener(topics = "#parse(${topics})")
So, I wrote this method:
public String[] parse(String s) {
    ExpressionParser parser = new SpelExpressionParser();
    return Arrays.stream(s.split(","))
            .map(key -> (String) parser.parseExpression(key).getValue())
            .toArray(String[]::new);
}
But the parse method is not invoked.
So, I tried to do this directly in the annotation:
@KafkaListener(topics = "#{Arrays.stream('${topics}'.split(',')).map(key -> ${key}).toArray(String[]::new)}")
But this solution also gives me errors.
Edit 2:
Modified this way, the method is invoked:
@KafkaListener(topics = "parse()")
@Bean
public String[] parse(String s) {
    ...
}
The problem is how to get the topics property inside the method.
You can't invoke arbitrary methods like that; you need to reference a bean, e.g. @someBean.parse(...); using #parse requires registering a static method as a function.
However, this works for me and is much simpler:
list1=valueA,valueB
list2=valueC
list3=valueD,valueE
topics=${list1},${list2},${list3}
and
#KafkaListener(id = "so64390079", topics = "#{'${topics}'.split(',')}")
EDIT
If you can't use placeholders in topics, this works...
@SpringBootApplication
public class So64390079Application {

    public static void main(String[] args) {
        SpringApplication.run(So64390079Application.class, args);
    }

    @KafkaListener(id = "so64390079", topics = "#{@parser.parse('${topics}')}")
    public void listen(String in) {
        System.out.println(in);
    }
}
@Component
class Parser implements EnvironmentAware {

    private Environment environment;

    @Override
    public void setEnvironment(Environment environment) {
        this.environment = environment;
    }

    public String[] parse(String[] topics) {
        StringBuilder sb = new StringBuilder();
        for (String topic : topics) {
            sb.append(this.environment.getProperty(topic));
            sb.append(',');
        }
        return StringUtils.commaDelimitedListToStringArray(sb.toString().substring(0, sb.length() - 1));
    }
}
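For illustration, here is roughly what the expression resolves to at runtime (a sketch using the property values from the question):
// '${topics}' resolves to "list1,list2,list3"; Spring's conversion service
// turns that comma-delimited String into a String[] to match the parameter.
String[] resolved = parser.parse(new String[] { "list1", "list2", "list3" });
// parse() looks each name up in the Environment:
//   list1 -> "valueA,valueB", list2 -> "valueC", list3 -> "valueD,valueE"
// so 'resolved' is {"valueA", "valueB", "valueC", "valueD", "valueE"}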
We have a Spring Java application using RabbitMQ, and here is the scenario:
There is a consumer receiving messages from a queue and sending them to another one. We are using SimpleRabbitListenerContainerFactory as the container factory, but when sending the messages to the other queue inside a parallelStream we get an IllegalStateException: "Cannot determine target ConnectionFactory for lookup key".
When we remove the "parallelStream" it works flawlessly.
public void sendMessage(final StagingMessage stagingMessage, final Long timestamp, final String country) {
    final List<TransformedMessage> messages = processMessageList(stagingMessage);
    messages.parallelStream().forEach(message -> {
        final TransformedMessage transformedMessage = buildMessage(timestamp, ApiConstants.POST_METHOD, country);
        myMessageSender.sendQueue(country, transformedMessage);
    });
}
Connection factory, where the lookup key is set:
@Configuration
@EnableRabbit
public class RabbitBaseConfig {

    @Autowired
    private QueueProperties queueProperties;

    @Bean
    @Primary
    public ConnectionFactory connectionFactory(final ConnectionFactory connectionFactoryA, final ConnectionFactory connectionFactoryB) {
        final SimpleRoutingConnectionFactory simpleRoutingConnectionFactory = new SimpleRoutingConnectionFactory();
        final Map<Object, ConnectionFactory> map = new HashMap<>();
        for (final String queue : queueProperties.getAQueueMap().values()) {
            map.put("[" + queue + "]", connectionFactoryA);
        }
        for (final String queue : queueProperties.getBQueueMap().values()) {
            map.put("[" + queue + "]", connectionFactoryB);
        }
        simpleRoutingConnectionFactory.setTargetConnectionFactories(map);
        return simpleRoutingConnectionFactory;
    }

    @Bean
    public Jackson2JsonMessageConverter jackson2JsonMessageConverter() {
        return new Jackson2JsonMessageConverter();
    }
}
Welcome to Stack Overflow!
You should always show the pertinent code and configuration beans when asking questions like this.
I assume you are using the RoutingConnectionFactory.
It uses a ThreadLocal to store the lookup key so the send has to happen on the same thread that set the key.
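In other words, whoever calls the template must first bind the key on the current thread. A minimal sketch (the connection factory and template variable names are assumed):
// Bind the lookup key on the sending thread; parallelStream workers never see it
SimpleResourceHolder.bind(routingConnectionFactory, "[myQueue]");
try {
    rabbitTemplate.convertAndSend("myQueue", payload);
} finally {
    SimpleResourceHolder.unbind(routingConnectionFactory);
}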
You generally should never go asynchronous in a listener anyway; you risk message loss. To increase concurrency, use the concurrency properties on the container.
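For example, a sketch of raising concurrency on the container factory instead (bean wiring assumed):
@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // multiple consumer threads per listener, instead of going asynchronous inside it
    factory.setConcurrentConsumers(5);
    factory.setMaxConcurrentConsumers(10);
    return factory;
}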
EDIT
One technique would be to convey the lookup key in a message header:
@Bean
public RabbitTemplate template(ConnectionFactory rcf) {
    RabbitTemplate rabbitTemplate = new RabbitTemplate(rcf);
    Expression expression = new SpelExpressionParser().parseExpression("messageProperties.headers['cfSelector']");
    rabbitTemplate.setSendConnectionFactorySelectorExpression(expression);
    return rabbitTemplate;
}
#RabbitListener(queues = "foo")
public void listen1(String in) {
IntStream.range(0, 10)
.parallel()
.mapToObj(i -> in + i)
.forEach(val -> {
this.template.convertAndSend("bar", val.toUpperCase(), msg -> {
msg.getMessageProperties().setHeader("cfSelector", "[bar]");
return msg;
});
});
}
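Because the lookup key now travels in a header on the message itself, the selector expression is evaluated at send time against that message, on whichever thread calls the template, so the parallelStream threads no longer matter.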
Thanks for reading ahead of time. In my main application class I have a PublishSubscribeChannel:
#Bean(name = "feeSchedule")
public SubscribableChannel getMessageChannel() {
return new PublishSubscribeChannel();
}
In a service that does a long-running process, a fee schedule is created; I inject the channel into that service:
@Service
public class FeeScheduleCompareServiceImpl implements FeeScheduleCompareService {

    @Autowired
    MessageChannel outChannel;

    public List<FeeScheduleUpdate> compareFeeSchedules(String oldStudyId) {
        List<FeeScheduleUpdate> sortedResultList = longMethod(oldStudyId);
        outChannel.send(MessageBuilder.withPayload(sortedResultList).build());
        return sortedResultList;
    }
}
Now this is the part I'm struggling with. I want to use a CompletableFuture and get the payload of the event in future A, in another Spring bean. I need future A to return the payload from the message. I think I want to create a ServiceActivator to be the message endpoint, but as I said, I need it to return the payload for future A.
@org.springframework.stereotype.Service
public class SFCCCompareServiceImpl implements SFCCCompareService {

    @Autowired
    private SubscribableChannel outChannel;

    @Override
    public List<SFCCCompareDTO> compareSFCC(String state, int service) {
        ArrayList<SFCCCompareDTO> returnList = new ArrayList<SFCCCompareDTO>();
        CompletableFuture<List<FeeScheduleUpdate>> fa = CompletableFuture.supplyAsync(() -> {
            // block A: WHAT GOES HERE?!?!
            outChannel.subscribe();
        });
        CompletableFuture<List<StateFeeCodeClassification>> fb = CompletableFuture.supplyAsync(() -> {
            return this.stateFeeCodeClassificationRepository.findAll();
        });
        CompletableFuture<List<SFCCCompareDTO>> fc = fa.thenCombine(fb, (a, b) -> {
            // block C: reached when both A & B are complete
            b.forEach(stateFeeCodeClassification ->
                    a.forEach(feeScheduleUpdate ->
                            returnList.add(new SFCCCompareDTO())));
            return returnList;
        });
        fc.join();
        return returnList;
    }
}
I was thinking there would be a service activator like:
@MessageEndpoint
public class UpdatesHandler implements MessageHandler {

    @ServiceActivator(requiresReply = "true")
    public List<FeeScheduleUpdate> getUpdates(Message m) {
        return (List<FeeScheduleUpdate>) m.getPayload();
    }
}
Your question isn't clear, but I'll try to help you with some info.
Spring Integration doesn't provide CompletableFuture support, but it does provide asynchronous handling and replies.
See Asynchronous Gateway for more information. And also see Asynchronous Service Activator.
outChannel.subscribe() should come with the MessageHandler callback, by the way.
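For example, a minimal sketch (names taken from your code) that completes future A from such a callback:
CompletableFuture<List<FeeScheduleUpdate>> fa = new CompletableFuture<>();
// subscribe() takes a MessageHandler; complete the future from its callback
MessageHandler handler = message -> fa.complete((List<FeeScheduleUpdate>) message.getPayload());
outChannel.subscribe(handler);
try {
    List<FeeScheduleUpdate> updates = fa.join(); // blocks until a message is published
} finally {
    outChannel.unsubscribe(handler);
}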
When trying to implement a unit test in a Spring Boot application, I can't retrieve a ConsumerRecord, although a custom serializer using my own POJO is working. I checked it with the kafka-console-consumer, where a new message is generated and appears on the console each and every time I run the test.
What do I have to do to get the record instead of null?
@RunWith(SpringRunner.class)
@SpringBootTest
@DisplayName("Testing GlobalMessageTest")
@DirtiesContext
public class NumberPlateSenderTest {

    private static Logger log = LogManager.getLogger(NumberPlateSenderTest.class);

    @Autowired
    KafkaeskAdapterApplication kafkaeskAdapterApplication;

    @Autowired
    private NumberPlateSender numberPlateSender;

    private KafkaMessageListenerContainer<String, NumberPlate> container;
    private BlockingQueue<ConsumerRecord<String, NumberPlate>> records;

    private static final String SENDER_TOPIC = "numberplate_test_topic";

    @ClassRule
    public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, SENDER_TOPIC);

    @Before
    public void setUp() throws Exception {
        // set up the Kafka consumer properties
        Map<String, Object> consumerProperties = KafkaTestUtils.consumerProps("sender", "false", embeddedKafka);
        consumerProperties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        consumerProperties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, NumberPlateDeserializer.class);
        // create a Kafka consumer factory
        DefaultKafkaConsumerFactory<String, NumberPlate> consumerFactory =
                new DefaultKafkaConsumerFactory<>(consumerProperties);
        // set the topic that needs to be consumed
        ContainerProperties containerProperties = new ContainerProperties(SENDER_TOPIC);
        // create a Kafka MessageListenerContainer
        container = new KafkaMessageListenerContainer<>(consumerFactory, containerProperties);
        // create a thread safe queue to store the received message
        records = new LinkedBlockingQueue<>();
        // setup a Kafka message listener
        container.setupMessageListener((MessageListener<String, NumberPlate>) record -> {
            log.info("Message Listener received message='{}'", record.toString());
            records.add(record);
        });
        // start the container and underlying message listener
        container.start();
        // wait until the container has the required number of assigned partitions
        ContainerTestUtils.waitForAssignment(container, embeddedKafka.getPartitionsPerTopic());
    }

    @DisplayName("Should send a Message to a Producer and retrieve it")
    @Test
    public void TestProducer() throws InterruptedException {
        // Test instance of NumberPlate to send
        NumberPlate localNumberplate = new NumberPlate();
        byte[] bytes = "0x33".getBytes();
        localNumberplate.setImageBlob(bytes);
        localNumberplate.setNumberString("ABC123");
        log.info(localNumberplate.toString());
        // Send it
        numberPlateSender.sendNumberPlateMessage(localNumberplate);
        // Retrieve it
        ConsumerRecord<String, NumberPlate> received = records.poll(3, TimeUnit.SECONDS);
        log.info("Received the following content of ConsumerRecord: {}", received);
        if (received == null) {
            assert false;
        } else {
            NumberPlate retrNumberplate = received.value();
            Assert.assertEquals(retrNumberplate, localNumberplate);
        }
    }

    @After
    public void tearDown() {
        // stop the container
        container.stop();
    }
}
The complete code can be seen at my github repository.
I read a load of different SO questions and searched the web, but can't figure out what is wrong with my code. Other users posted similar problems, but to no avail.
The Kafka version which runs on my Craptop is kafka_2.11-1.0.1.
The Spring Kafka client is version 2.1.5.RELEASE.
Your problem is that you start the consumer against the embedded Kafka, but send data to the real one. I don't know what your goal is, but I made it work against the embedded Kafka like this:
@BeforeClass
public static void setup() {
    System.setProperty("kafka.bootstrapAddress", embeddedKafka.getBrokersAsString());
}
I override your kafka.bootstrapAddress configuration property for the producer with the broker address provided by the embedded Kafka. Setting it in @BeforeClass ensures the property is in place before the Spring application context, and therefore the producer, is created.
In this case I fail with:
java.lang.AssertionError: expected: dev.semo.kafkaeskadapter.models.NumberPlate<NumberPlate{numberString='ABC123', imageBlob=[48, 120, 51, 51]}> but was: dev.semo.kafkaeskadapter.models.NumberPlate<NumberPlate{numberString='ABC123', imageBlob=[48, 120, 51, 51]}>
Expected :dev.semo.kafkaeskadapter.models.NumberPlate<NumberPlate{numberString='ABC123', imageBlob=[48, 120, 51, 51]}>
Actual :dev.semo.kafkaeskadapter.models.NumberPlate<NumberPlate{numberString='ABC123', imageBlob=[48, 120, 51, 51]}>
But that's just because you use this assertion:
Assert.assertEquals(retrNumberplate, localNumberplate);
Meanwhile your NumberPlate doesn't provide a proper equals() implementation. Something like this:
@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (o == null || getClass() != o.getClass()) return false;
    NumberPlate that = (NumberPlate) o;
    return Objects.equals(numberString, that.numberString) &&
            Arrays.equals(imageBlob, that.imageBlob);
}

@Override
public int hashCode() {
    int result = Objects.hash(numberString);
    result = 31 * result + Arrays.hashCode(imageBlob);
    return result;
}
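With equals() (and the matching hashCode()) in place, the assertion passes.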
Thank you for providing the whole project to play with and reproduce! With the "question-answer-question-answer" game we would have spent too much time here :-).
I am creating a Kafka Spring producer under Spring Boot which will send data to Kafka and then write to a database; I want all that work to be in one transaction. I am new to Kafka and no expert on Spring, and am having some difficulty. Any pointers much appreciated.
So far my code writes to Kafka successfully in a loop. I have not yet set up
the DB, but have proceeded to set up global transactions by adding a transactionIdPrefix to the producerFactory in the configuration:
producerFactory.setTransactionIdPrefix("MY_SERVER");
and added @Transactional to the method that does the Kafka send. Eventually I plan to do my DB work in that same method.
Problem: the code runs great the first time. But if I stop the program, even cleanly, I find that the code hangs the second time I run it, as soon as it enters the @Transactional method. If I comment out the @Transactional, it enters the method but hangs on the Kafka template send().
The problem seems to be the transaction ID. If I change the prefix and rerun, the program runs fine again the first time but hangs when I run it again, until a new prefix is chosen. Since the transaction ID counter starts at zero after a restart, if the transaction ID prefix does not change then the same transaction ID will be reused upon restart.
It seems to me that the original transaction ID is still open on the server and was never committed. (I can read the data off the topic using the console-consumer, but that will read uncommitted.) But if that is the case, how do I get Spring to commit the transaction? I am thinking my configuration must be wrong. Or is the issue possibly that transaction IDs can never be reused? (In which case, how does one solve that?)
Here is my relevant code. Config is:
@SpringBootApplication
public class MYApplication {

    @Autowired
    private static ChangeSweeper changeSweeper;

    @Value("${kafka.bootstrap-servers}")
    private String bootstrapServers;

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> configProps = new HashMap<>();
        configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        DefaultKafkaProducerFactory<String, String> producerFactory = new DefaultKafkaProducerFactory<>(configProps);
        producerFactory.setTransactionIdPrefix("MY_SERVER");
        return producerFactory;
    }

    @Bean
    public KafkaTransactionManager<String, String> KafkaTransactionManager() {
        return new KafkaTransactionManager<String, String>(producerFactory());
    }

    @Bean(name = "kafkaProducerTemplate")
    public KafkaTemplate<String, String> kafkaProducerTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
And the method that does the transaction is:
@Transactional
public void send(final List<Record> records) {
    logger.debug("sending {} records; batchSize={}; topic={}", records.size(), batchSize, kafkaTopic);
    // Divide the record set into batches of size batchSize and send each batch with a Kafka transaction:
    for (int batchStartIndex = 0; batchStartIndex < records.size(); batchStartIndex += batchSize) {
        int batchEndIndex = Math.min(records.size(), batchStartIndex + batchSize);
        List<Record> nextBatch = records.subList(batchStartIndex, batchEndIndex); // subList's end index is exclusive
        logger.debug("## batch is from " + batchStartIndex + " to " + (batchEndIndex - 1));
        for (Record record : nextBatch) {
            kafkaProducerTemplate.send(kafkaTopic, record.getKey().toString(), record.getData().toString());
            logger.debug("Sending> " + record);
        }
        // I will put the DB writes here
    }
}
This works fine for me no matter how many times I run it (but I have to run 3 broker instances on my local machine because transactions require that by default)...
@SpringBootApplication
@EnableTransactionManagement
public class So47817034Application {

    public static void main(String[] args) {
        SpringApplication.run(So47817034Application.class, args).close();
    }

    private final CountDownLatch latch = new CountDownLatch(2);

    @Bean
    public ApplicationRunner runner(Foo foo) {
        return args -> {
            foo.send("foo");
            foo.send("bar");
            this.latch.await(10, TimeUnit.SECONDS);
        };
    }

    @Bean
    public KafkaTransactionManager<Object, Object> KafkaTransactionManager(KafkaProperties properties) {
        return new KafkaTransactionManager<Object, Object>(kafkaProducerFactory(properties));
    }

    @Bean
    public ProducerFactory<Object, Object> kafkaProducerFactory(KafkaProperties properties) {
        DefaultKafkaProducerFactory<Object, Object> factory =
                new DefaultKafkaProducerFactory<Object, Object>(properties.buildProducerProperties());
        factory.setTransactionIdPrefix("foo-");
        return factory;
    }

    @KafkaListener(id = "foo", topics = "so47817034")
    public void listen(String in) {
        System.out.println(in);
        this.latch.countDown();
    }

    @Component
    public static class Foo {

        @Autowired
        private KafkaTemplate<Object, Object> template;

        @Transactional
        public void send(String go) {
            this.template.send("so47817034", go);
        }
    }
}
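If you only have a single broker available for development, a common workaround is to relax the transaction log requirements on the broker side (these are Kafka server.properties settings, not Spring ones):
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1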