EmbeddedKafka AdminClient shuts down before Spring app starts for tests - spring-boot

I'm trying to write integration tests for a Spring Kafka app (Spring Boot 2.0.6, Spring Kafka 2.1.10) and am seeing many instances of
INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x166e432ebec0001 type:create cxid:0x5e zxid:0x24 txntype:-1 reqpath:n/a Error Path:/brokers/topics/my-topic/partitions Error:KeeperErrorCode = NoNode for /brokers/topics/my-topic/partitions
and various flavors of the path (/brokers, /brokers/topics, etc.) in the logs before the Spring app starts. The AdminClient then shuts down and this message is logged:
DEBUG org.apache.kafka.common.network.Selector - [SocketServer brokerId=0] Connection with /127.0.0.1 disconnected
java.io.EOFException: null
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:124)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:93)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:235)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:196)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:547)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:483)
at org.apache.kafka.common.network.Selector.poll(Selector.java:412)
at kafka.network.Processor.poll(SocketServer.scala:575)
at kafka.network.Processor.run(SocketServer.scala:492)
at java.lang.Thread.run(Thread.java:748)
I'm using the @ClassRule startup option in the test like so:
@ClassRule
@Shared
private KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, 'my-topic')
, autowiring a KafkaTemplate, and setting the Spring properties for the connection based on the embedded Kafka values:
def setupSpec() {
    System.setProperty('spring.kafka.bootstrap-servers', embeddedKafka.getBrokersAsString())
    System.setProperty('spring.cloud.stream.kafka.binder.zkNodes', embeddedKafka.getZookeeperConnectionString())
}
Once the Spring app starts, I again see instances of the user-level KeeperException messages: o.a.z.server.PrepRequestProcessor : Got user-level KeeperException when processing sessionid:0x166e445836d0001 type:setData cxid:0x6b zxid:0x2b txntype:-1 reqpath:n/a Error Path:/config/topics/__consumer_offsets Error:KeeperErrorCode = NoNode for /config/topics/__consumer_offsets.
Any idea where I'm going wrong here? I can provide other setup information and log messages but just took an educated guess on what may be most helpful initially.

I'm not familiar with Spock, but what I do know is that a @KafkaListener method is invoked on its own thread, so you can't just assert on it in the then: block directly.
You need some kind of blocking wait in your test case.
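The idea can be sketched outside Spring with a CountDownLatch: the "listener" runs on its own thread, and the test thread blocks until it has been invoked or a timeout expires. This is a minimal plain-Java illustration of the pattern, not Spock or Spring Kafka code:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

// Stand-in for a @KafkaListener: the message arrives on a different thread,
// so the asserting thread must block until the listener has actually run.
public class BlockingWaitSketch {

    static String awaitMessage(long timeoutSeconds) {
        CountDownLatch latch = new CountDownLatch(1);
        AtomicReference<String> received = new AtomicReference<>();

        // Simulates the listener container invoking the handler on its own thread.
        Thread listenerThread = new Thread(() -> {
            received.set("Test message");
            latch.countDown(); // signal the waiting test thread
        });
        listenerThread.start();

        // The test thread blocks here instead of asserting immediately.
        try {
            if (!latch.await(timeoutSeconds, TimeUnit.SECONDS)) {
                throw new AssertionError("listener was not invoked in time");
            }
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return received.get();
    }

    public static void main(String[] args) {
        System.out.println(awaitMessage(5));
    }
}
```

Spock's BlockingVariable gives you the same shape in a more declarative form.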
I tried BlockingVariable against the real service (not a mock) and I can see your println(message) in the logs, but the BlockingVariable still doesn't work for me somehow:
@DirtiesContext
@SpringBootTest(classes = [KafkaIntTestApplication.class])
@ActiveProfiles('test')
class CustomListenerSpec extends Specification {

    @ClassRule
    @Shared
    public KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, false, 'my-topic')

    @Autowired
    private KafkaTemplate<String, String> template

    @SpyBean
    private SimpleService service

    final def TOPIC_NAME = 'my-topic'

    def setupSpec() {
        System.setProperty('spring.kafka.bootstrapServers', embeddedKafka.getBrokersAsString())
    }

    def 'Sample test'() {
        given:
        def testMessagePayload = "Test message"
        def message = MessageBuilder.withPayload(testMessagePayload).setHeader(KafkaHeaders.TOPIC, TOPIC_NAME).build()
        def result = new BlockingVariable<Boolean>(5)
        service.handleMessage(_) >> {
            result.set(true)
        }

        when: 'We put a message on the topic'
        template.send(message)

        then: 'the service should be called'
        result.get()
    }
}
And logs are like this:
2018-11-05 13:38:51.089 INFO 8888 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [my-topic-0, my-topic-1]
Test message
BlockingVariable.get() timed out after 5,00 seconds
at spock.util.concurrent.BlockingVariable.get(BlockingVariable.java:113)
at com.example.CustomListenerSpec.Sample test(CustomListenerSpec.groovy:54)
2018-11-05 13:38:55.917 INFO 8888 --- [ main] s.c.a.AnnotationConfigApplicationContext : Closing org.springframework.context.annotation.AnnotationConfigApplicationContext#11ebb1b6: startup date [Mon Nov 05 13:38:49 EST 2018]; root of context hierarchy
Also I had to add this dependency:
testImplementation "org.hamcrest:hamcrest-core"
UPDATE
OK. The real problem was that MockConfig was not visible to the test context configuration; @Import(MockConfig.class) does the trick. @Primary also gives us an additional signal about which bean to pick for injection in the test class.

@ArtemBilan's response set me on the right path (thanks to him for chiming in), and I was able to figure it out after looking into other BlockingVariable articles and examples. I used BlockingVariable in a mock's response instead of as a callback: when the mock's response is invoked, it sets the value to true, the then: block just calls result.get(), and the test passes.
@DirtiesContext
@ActiveProfiles('test')
@SpringBootTest
@Import(MockConfig.class)
class CustomListenerSpec extends TestSpecBase {

    @ClassRule
    @Shared
    private KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, false, TOPIC_NAME)

    @Autowired
    private KafkaTemplate<String, String> template

    @Autowired
    private SimpleService service

    final def TOPIC_NAME = 'my-topic'

    def setupSpec() {
        System.setProperty('spring.kafka.bootstrap-servers', embeddedKafka.getBrokersAsString())
    }

    def 'Sample test'() {
        given:
        def testMessagePayload = "Test message"
        def message = MessageBuilder.withPayload(testMessagePayload).setHeader(KafkaHeaders.TOPIC, TOPIC_NAME).build()
        def result = new BlockingVariable<Boolean>(5)
        service.handleMessage(_ as String) >> {
            result.set(true)
        }

        when: 'We put a message on the topic'
        template.send(message)

        then: 'the service should be called'
        result.get()
    }
}

Related

Springboot test method involve kafka producer for integration test

I am testing a method which involves using Kafka as a producer. When I run the test, I find that it just keeps looping, waiting for the consumer, which I have not set up.
Here is the method in the service class:
public String Applyjob(int order_id, int apply_id) {
    // check order_id
    DashBroad dashBroad = dashBroadRepository.findByOrder_id(order_id);
    try {
        if (dashBroad.getApplier_id().contains(userCoreService.findById(apply_id))) {
            return "you have already applied the job";
        }
        dashBroad.getApplier_id().add(userCoreService.getUser(apply_id)); // update the dashbroad
        dashBroad.setApplier_id(dashBroad.getApplier_id());
        dashBroadRepository.save(dashBroad);
        // add to the applications records in the user entity
        postApplication(apply_id, order_id);
        // send notification
        String notification = "You have successfully applied for job id:" + order_id;
        sendNotice(notification, apply_id, order_id);
        return "successfully added";
    } catch (IndexOutOfBoundsException exception) {
        return "the number of application exceed the limit";
    }
}

// kafka producer
public void sendNotice(String notification, int apply_id, int order_id) {
    try {
        LocalDateTime myDateObj = LocalDateTime.now();
        DateTimeFormatter myFormatObj = DateTimeFormatter.ofPattern("dd-MM-yyyy HH:mm:ss");
        String formattedDate = myDateObj.format(myFormatObj);
        kafkaTemplate.send("notificationTopic", new NoticeRespond(
                apply_id, formattedDate, notification
        ));
        log.info(apply_id + " has applied job with id: " + order_id);
    } catch (Exception exception) {
        log.error("cant found the consumer");
    }
}

private void postApplication(int apply_id, int order_id) {
    try {
        JobOrder job = jobService.findByOrderid(order_id);
        User user = userCoreService.findById(apply_id);
        user.getApplications().add(job);
        System.out.println(job);
        userCoreService.saveAndReturn(user);
        log.info("add application");
    } catch (IndexOutOfBoundsException exception) {
        String notification = "You have already send to much of applications.Please delete some and try again:" + order_id;
        sendNotice(notification, apply_id, order_id);
    }
}
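As a side note, the dd-MM-yyyy HH:mm:ss pattern used in sendNotice can be checked in isolation; with a fixed LocalDateTime the output is deterministic (a standalone sketch, not part of the service):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class TimestampFormatSketch {

    // Same pattern string as in sendNotice.
    static String format(LocalDateTime when) {
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("dd-MM-yyyy HH:mm:ss");
        return when.format(fmt);
    }

    public static void main(String[] args) {
        // Fixed instant so the output is reproducible.
        System.out.println(format(LocalDateTime.of(2023, 2, 12, 2, 26, 17)));
        // → 12-02-2023 02:26:17
    }
}
```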
I am testing the Applyjob method, which calls sendNotice (the Kafka producer method).
test code:
@SpringBootTest
@AutoConfigureMockMvc
class DashbroadServiceTest {

    @Autowired
    private DashbroadService dashbroadService;
    @Autowired
    private DashBroadRepository dashBroadRepository;
    @Autowired
    private UserRepository userRepository;
    @Autowired
    private JobRepository jobRepository;
    @Autowired
    private UserCoreService userCoreService;

    @Test
    @Transactional
    void applyjob() {
        List<User> list = new ArrayList<>();
        User user1 = new User(0, "admin", "admin", null, null, "yl", "sd",
                "434", "dsf", null, 4, 2, new ArrayList<>());
        User user2 = new User(0, "alex", "admin", null, null, "yl", "sd",
                "434", "dsf", null, 4, 2, new ArrayList<>());
        userRepository.save(user1);
        userRepository.save(user2);
        jobRepository.save(new JobOrder(0, 1, "sda", null, null, null, 0, 3, false, 0, null));
        Assertions.assertEquals("admin", userCoreService.findById(1).getUsername());
        dashBroadRepository.save(new DashBroad(0, 1, 1, 2, list, list));
        String res = dashbroadService.Applyjob(1, 2);
        Assertions.assertEquals("successfully added", res);
    }
}
Log:
2023-02-12T02:26:17.457+08:00 WARN 15971 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected
2023-02-12T02:26:17.659+08:00 INFO 15971 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Node -1 disconnected.
2023-02-12T02:26:18.873+08:00 WARN 15971 --
It just loops the above output, but when I stop it, the test passes because of the catch block. My question is: can I just raise a runtime error and let it be caught, or build a MockConsumer for Kafka, or is there any way to just ignore the Kafka part? Please help.
The producer sends messages to Kafka independently of the consumers. Why do you think that the problem is waiting for the consumer? You probably didn't set up Kafka configuration for the test and kafkaTemplate can't connect to it.
First of all, you can delegate the work of sending messages to a separate KafkaSender service, following the Single Responsibility Principle (move the sendNotice method to a new KafkaSender class).
@Service
@AllArgsConstructor
public class KafkaSender {

    private final KafkaTemplate<String, Object> kafkaTemplate;

    public void sendNotice(String notification, int apply_id, int order_id) {
        // ...
    }
}
This will make it easier to test the current complex DashbroadService class.
Next, what kind of test do you want to write?
If you want to write a unit test without Kafka, then just mock the KafkaSender bean in the test's Spring context:
@SpringBootTest
@AutoConfigureMockMvc
class DashbroadServiceTest {
    // ...
    @MockBean
    private KafkaSender kafkaSender;
    // ...
}
You will also be able to verify the calls to this mocked kafkaSender bean via Mockito.verify(...) if needed.
If you want to write an Integration or E2E test with Kafka, then use Embedded Kafka or Kafka with TestContainers (doc). In this case, you can configure the producer to connect to a running Kafka. You can also programmatically create a consumer for additional validation of messages in topics (it is not necessary to send messages through the Spring kafkaTemplate).
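If you would rather not pull in a mocking library at all, the same isolation can be sketched with a hand-rolled recording fake. The NoticeSender interface and the applyJob stand-in below are hypothetical simplifications of the classes in the question, just to show the seam:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical seam: the service depends on this interface instead of
// calling KafkaTemplate directly, so tests can swap in a fake.
interface NoticeSender {
    void sendNotice(String notification, int applyId, int orderId);
}

// Recording fake used in place of the real Kafka-backed sender.
class RecordingNoticeSender implements NoticeSender {
    final List<String> sent = new ArrayList<>();

    @Override
    public void sendNotice(String notification, int applyId, int orderId) {
        sent.add(notification + "|" + applyId + "|" + orderId);
    }
}

public class FakeSenderSketch {

    // Stand-in for the service logic that should trigger a notification.
    static String applyJob(NoticeSender sender, int orderId, int applyId) {
        sender.sendNotice("You have successfully applied for job id:" + orderId,
                applyId, orderId);
        return "successfully added";
    }

    public static void main(String[] args) {
        RecordingNoticeSender fake = new RecordingNoticeSender();
        System.out.println(applyJob(fake, 1, 2));
        System.out.println(fake.sent.get(0));
    }
}
```

The test then asserts on what the fake recorded, with no broker and no network in the loop.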

How to set up Spring Kafka test using EmbeddedKafkaRule/ EmbeddedKafka to fix TopicExistsException Intermittent Error?

I have been having issues with testing my Kafka consumer and producer. The integration tests fail intermittently with TopicExistsException.
This is what my current test class, UserEventListenerTest, looks like for one of the consumers:
@SpringBootTest(properties = ["application.kafka.user-event-topic=user-event-topic-UserEventListenerTest",
    "application.kafka.bootstrap=localhost:2345"])
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
class UserEventListenerTest {

    private val logger: Logger = LoggerFactory.getLogger(javaClass)

    @Value("\${application.kafka.user-event-topic}")
    private lateinit var userEventTopic: String

    @Autowired
    private lateinit var kafkaConfigProperties: KafkaConfigProperties

    private lateinit var embeddedKafka: EmbeddedKafkaRule
    private lateinit var sender: KafkaSender<String, UserEvent>
    private lateinit var receiver: KafkaReceiver<String, UserEvent>

    @BeforeAll
    fun setup() {
        embeddedKafka = EmbeddedKafkaRule(1, false, userEventTopic)
        embeddedKafka.kafkaPorts(kafkaConfigProperties.bootstrap.substringAfterLast(":").toInt())
        embeddedKafka.before()

        val producerProps: HashMap<String, Any> = hashMapOf(
            ProducerConfig.BOOTSTRAP_SERVERS_CONFIG to kafkaConfigProperties.bootstrap,
            ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG to "org.apache.kafka.common.serialization.StringSerializer",
            ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG to "com.project.userservice.config.MockAvroSerializer"
        )
        val senderOptions = SenderOptions.create<String, UserEvent>(producerProps)
        sender = KafkaSender.create(senderOptions)

        val consumerProps: HashMap<String, Any> = hashMapOf(
            ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG to kafkaConfigProperties.bootstrap,
            ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG to "org.apache.kafka.common.serialization.StringDeserializer",
            ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG to kafkaConfigProperties.deserializer,
            ConsumerConfig.AUTO_OFFSET_RESET_CONFIG to "earliest",
            "schema.registry.url" to kafkaConfigProperties.schemaRegistry,
            ConsumerConfig.GROUP_ID_CONFIG to "test-consumer"
        )
        val receiverOptions = ReceiverOptions.create<String, UserEvent>(consumerProps)
            .subscription(Collections.singleton("some-topic-after-UserEvent"))
        receiver = KafkaReceiver.create(receiverOptions)
    }
}
// Some tests
// Not shown as they are irrelevant
...
The UserEventListener class consumes a message from user-event-topic-UserEventListenerTest and publishes a message to some-topic-after-UserEvent.
As you can see from the setup, I have a test producer that will publish a message to user-event-topic-UserEventListenerTest so that I can test whether UserEventListener consumes the message and a test consumer that will consume the message from the some-topic-after-UserEvent so that I can see if UserEventListener publishes a message to some-topic-after-UserEvent after processing the record.
The KafkaConfigProperties class is as follows.
@Component
@ConfigurationProperties(prefix = "application.kafka")
data class KafkaConfigProperties(
    var bootstrap: String = "",
    var schemaRegistry: String = "",
    var deserializer: String = "",
    var userEventTopic: String = "",
)
And the application.yml looks like this.
application:
  kafka:
    user-event-topic: "platform.user-events.v1"
    bootstrap: "localhost:9092"
    schema-registry: "http://localhost:8081"
    deserializer: com.project.userservice.config.MockAvroDeserializer
Error logs
com.project.userservice.user.UserEventListenerTest > initializationError FAILED
kafka.common.KafkaException:
at org.springframework.kafka.test.EmbeddedKafkaBroker.createTopics(EmbeddedKafkaBroker.java:354)
at org.springframework.kafka.test.EmbeddedKafkaBroker.lambda$createKafkaTopics$4(EmbeddedKafkaBroker.java:341)
at org.springframework.kafka.test.EmbeddedKafkaBroker.doWithAdmin(EmbeddedKafkaBroker.java:368)
at org.springframework.kafka.test.EmbeddedKafkaBroker.createKafkaTopics(EmbeddedKafkaBroker.java:340)
at org.springframework.kafka.test.EmbeddedKafkaBroker.afterPropertiesSet(EmbeddedKafkaBroker.java:284)
at org.springframework.kafka.test.rule.EmbeddedKafkaRule.before(EmbeddedKafkaRule.java:114)
at com.project.userservice.user.UserEventListenerTest.setup(UserEventListenerTest.kt:62)
Caused by:
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TopicExistsException: Topic 'user-event-topic-UserEventListenerTest' already exists.
at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:104)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:272)
at org.springframework.kafka.test.EmbeddedKafkaBroker.createTopics(EmbeddedKafkaBroker.java:351)
... 6 more
Caused by:
org.apache.kafka.common.errors.TopicExistsException: Topic 'user-event-topic-UserEventListenerTest' already exists.
What I have tried:
Use different bootstrap server address in each test by specifying the bootstrap configuration, e.g. #SpringBootTest(properties = ["application.kafka.bootstrap=localhost:2345"])
Use different topic names in each test by overwriting the topic configuration via #SpringBootTest just like the bootstrap server overwrite in the previous bullet point
Add #DirtiesContext to each test class
Package versions
Kotlin 1.3.61
Spring Boot - 2.2.3.RELEASE
io.projectreactor.kafka:reactor-kafka:1.2.2.RELEASE
org.springframework.kafka:spring-kafka-test:2.3.4.RELEASE (test implementation only)
Problem
I have multiple test classes that use EmbeddedKafkaRule and are set up more or less the same way. For each of them, I specify a different Kafka bootstrap server address and different topic names, but I still see the TopicExistsException intermittently.
What can I do to make my test classes consistent?
I specify different kafka bootstrap server address and topic names, but I still see the TopicAlreadyExists exceptions intermittently
That makes no sense; if they have a new port each time, and especially new topic names, it's impossible for the topic(s) to already exist.
Some suggestions:
Since you are using JUnit 5, don't use the JUnit 4 EmbeddedKafkaRule; use EmbeddedKafkaBroker instead. Or simply add @EmbeddedKafka, and the broker will be added as a bean to the Spring application context with its life cycle managed by Spring (use @DirtiesContext to destroy it). For non-Spring tests, the broker is created (and destroyed) by the JUnit 5 EmbeddedKafkaCondition and is available via EmbeddedKafkaCondition.getBroker().
Don't use explicit ports; let the broker use its default random port and use embeddedKafka.getBrokersAsString() for the bootstrap servers property.
If you must manage the brokers yourself (in @BeforeAll), destroy() them in @AfterAll.
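As an aside, if a test really does need a concrete port to inject into configuration, asking the OS for an ephemeral one avoids the collisions that hard-coded ports like 2345 can cause. A plain-Java sketch:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.net.ServerSocket;

public class EphemeralPortSketch {

    // Ask the OS for a free ephemeral port instead of hard-coding one.
    static int freePort() {
        try (ServerSocket socket = new ServerSocket(0)) {
            return socket.getLocalPort();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(freePort() > 1024);
    }
}
```

There is still a small race between releasing the port and the broker binding it, which is exactly why letting the embedded broker pick its own random port and reading getBrokersAsString() is the preferable approach.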

Spring Cloud Messaging Source is not sending messages to Kafka broker

I am following the 'Spring Microservices In Action' book, with some small deviations from the format chosen by the author. Namely, I am using Kotlin and Gradle rather than Java and Maven. Other than that, I am mostly following the code as presented.
In the chapter on Messaging I am running into a problem - I cannot publish a message using the Source class I am autowiring into my SimpleSourceBean.
I know the general setup is OK, as the Kafka topic is created and on application startup I see the corresponding log messages. I've tried autowiring the source explicitly in the class body as well as in the constructor, but had no success in either case.
Application class
@SpringBootApplication
@EnableEurekaClient
@EnableBinding(Source::class)
@EnableCircuitBreaker
class OrganizationServiceApplication {

    @Bean
    @LoadBalanced
    fun getRestTemplate(): RestTemplate {
        val restTemplate = RestTemplate()
        val interceptors = restTemplate.interceptors
        interceptors.add(UserContextInterceptor())
        restTemplate.interceptors = interceptors
        return restTemplate
    }
}

fun main(args: Array<String>) {
    runApplication<OrganizationServiceApplication>(*args)
}
This is the SimpleSourceBean implementation:
@Component
class SimpleSourceBean {

    @Autowired
    lateinit var source: Source

    val logger = LoggerFactory.getLogger(this.javaClass)

    fun publishOrgChange(action: String, orgId: String) {
        logger.debug("Sending Kafka message $action for Organization $orgId on source ${source}")
        val change = OrganizationChangeModel(
            OrganizationChangeModel::class.java.typeName,
            action,
            orgId,
            UserContext.correlationId!!)
        logger.debug("change message: $change")
        source.output()
            .send(MessageBuilder
                .withPayload(change)
                .build())
        logger.debug("Sent Kafka message $action for Organization $orgId successfully")
    }
}
and this is the Service class that uses the SimpleSourceBean to send the message to Kafka:
@Component
class OrganizationService {

    @Autowired
    lateinit var organizationRepository: OrganizationRepository

    @Autowired
    lateinit var simpleSourceBean: SimpleSourceBean

    val logger = LoggerFactory.getLogger(OrganizationService::class.java)

    // some omissions for brevity

    @HystrixCommand(
        fallbackMethod = "fallbackUpdate",
        commandKey = "updateOrganizationCommandKey",
        threadPoolKey = "updateOrganizationThreadPool")
    fun updateOrganization(organizationId: String, organization: Organization): Organization {
        val updatedOrg = organizationRepository.save(organization)
        simpleSourceBean.publishOrgChange("UPDATE", organizationId)
        return updatedOrg
    }

    private fun fallbackUpdate(organizationId: String, organization: Organization) =
        Organization(id = "000-000-00", name = "update not saved", contactEmail = "", contactName = "", contactPhone = "")

    @HystrixCommand
    fun saveOrganization(organization: Organization): Organization {
        val orgToSave = organization.copy(id = UUID.randomUUID().toString())
        val savedOrg = organizationRepository.save(orgToSave)
        simpleSourceBean.publishOrgChange("SAVE", savedOrg.id)
        return savedOrg
    }
}
The log messages
organizationservice_1 | 2019-08-23 23:15:33.939 DEBUG 18 --- [ionThreadPool-2] S.O.events.source.SimpleSourceBean : Sending Kafka message UPDATE for Organization e254f8c-c442-4ebe-a82a-e2fc1d1ff78a on source null
organizationservice_1 | 2019-08-23 23:15:33.940 DEBUG 18 --- [ionThreadPool-2] S.O.events.source.SimpleSourceBean : change message: OrganizationChangeModel(type=SpringMicroservicesInAction.OrganizationService.events.source.OrganizationChangeModel, action=UPDATE, organizationId=e254f8c-c442-4ebe-a82a-e2fc1d1ff78a, correlationId=c84d288f-bfd6-4217-9026-8a45eab058e1)
organizationservice_1 | 2019-08-23 23:15:33.941 DEBUG 18 --- [ionThreadPool-2] o.s.c.s.m.DirectWithAttributesChannel : preSend on channel 'output', message: GenericMessage [payload=OrganizationChangeModel(type=SpringMicroservicesInAction.OrganizationService.events.source.OrganizationChangeModel, action=UPDATE, organizationId=e254f8c-c442-4ebe-a82a-e2fc1d1ff78a, correlationId=c84d288f-bfd6-4217-9026-8a45eab058e1), headers={id=05799740-f8cf-85f8-54f8-74fce2679909, timestamp=1566602133941}]
organizationservice_1 | 2019-08-23 23:15:33.945 DEBUG 18 --- [ionThreadPool-2] tractMessageChannelBinder$SendingHandler : org.springframework.cloud.stream.binder.AbstractMessageChannelBinder$SendingHandler#38675bb5 received message: GenericMessage [payload=byte[224], headers={contentType=application/json, id=64e1e8f1-45f4-b5e6-91d7-c2df28b3d6cc, timestamp=1566602133943}]
organizationservice_1 | 2019-08-23 23:15:33.946 DEBUG 18 --- [ionThreadPool-2] nder$ProducerConfigurationMessageHandler : org.springframework.cloud.stream.binder.kafka.KafkaMessageChannelBinder$ProducerConfigurationMessageHandler#763a88a received message: GenericMessage [payload=byte[224], headers={contentType=application/json, id=7be5d188-5309-cba9-8297-74431c410152, timestamp=1566602133945}]
There are no further messages logged, including the final DEBUG log statement in SimpleSourceBean.
Checking inside the Kafka container if there are any messages on the 'orgChangeTopic' topic, it comes up empty:
root#99442804288f:/opt/kafka_2.11-0.10.1.0/bin# ./kafka-console-consumer.sh --from-beginning --topic orgChangeTopic --bootstrap-server 0.0.0.0:9092
Processed a total of 0 messages
Any pointer to where my problem might lie is greatly appreciated
edit:
adding the application.yml:
spring:
  cloud:
    stream:
      bindings:
        output:
          destination: orgChangeTopic
          content-type: application/json
      kafka:
        binder:
          zkNodes: "http://kafkaserver:2181"
          brokers: "http://kafkaserver:9092"
# omitting some irrelevant config
logging:
  level:
    org.apache.kafka: DEBUG
    org.springframework.cloud: DEBUG
    org.springframework.web: WARN
    springmicroservicesinaction.organizationservice: DEBUG
excerpt of the build.gradle file with relevant dependencies:
dependencies {
    // kotlin, spring boot, etc
    implementation("org.springframework.cloud:spring-cloud-stream:2.2.0.RELEASE")
    implementation("org.springframework.cloud:spring-cloud-starter-stream-kafka:2.2.0.RELEASE")
}
You need to show your application properties as well. Your kafka version is very old; 0.10.x.x doesn't support headers. What version of spring-cloud-stream are you using? Modern versions require a Kafka that supports headers (0.11 or preferably later - the current release is 2.3), unless you set the headerMode to none.
That said, I would expect to see an error message if we try to send headers to a version that doesn't support them.
implementation("org.springframework.cloud:spring-cloud-stream:2.2.0.RELEASE")
Also note that with modern versions, you no longer need
zkNodes: "http://kafkaserver:2181"
The kafka-clients version used by 2.2.0 supports topic provisioning via the Kafka broker directly and we no longer need to connect to zookeeper.
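Assuming the binder properties from the question, the trimmed application.yml might then look like the sketch below. Note that Kafka broker addresses are plain host:port pairs; the http:// scheme in the original config is not part of the bootstrap-server format:

```yaml
spring:
  cloud:
    stream:
      bindings:
        output:
          destination: orgChangeTopic
          content-type: application/json
      kafka:
        binder:
          # zkNodes removed: provisioning now goes through the broker directly
          brokers: "kafkaserver:9092"
```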

Report metrics during shutdown of spring-boot app

I have a shutdown hook which is successfully executed, but the metric is not reported. Any advice is appreciated! I guess the possible issues are:
1. StatsDMetricWriter might be disposed before the shutdown hook runs? How can I verify this? Or is there a way to ensure the ordering of the configured singletons?
2. The time gap between metric generation and app shutdown is smaller than the configured delay. I tried spawning a new thread with Thread.sleep(20000), but it didn't work.
The code snippets are as follows:
public class ShutDownHook implements DisposableBean {

    @Autowired
    private MetricRegistry registry;

    @Override
    public void destroy() throws Exception {
        registry.counter("appName.deployments.count").dec();
        // Spawned new thread here with high sleep with no effect
    }
}
My metrics configuration for Dropwizard is as follows:
@Bean
@ExportMetricReader
public MetricRegistryMetricReader metricsDWMetricReader() {
    return new MetricRegistryMetricReader(metricRegistry);
}

@Bean
@ExportMetricWriter
public MetricWriter metricWriter() {
    return new StatsdMetricWriter(app, host, port);
}
The reporting time delay is set as 1 sec:
spring.metrics.export.delay-millis=1000
EDIT:
The problem is as below:
DEBUG 10452 --- [pool-2-thread-1] o.s.b.a.m.statsd.StatsdMetricWriter : Failed to write metric. Exception: class java.util.concurrent.RejectedExecutionException, message: Task com.timgroup.statsd.NonBlockingUdpSender$2#1dd8867d rejected from java.util.concurrent.ThreadPoolExecutor
It looks like the ThreadPoolExecutor is shut down before the beans are destroyed.
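That RejectedExecutionException is exactly what a java.util.concurrent executor throws when a task is submitted after shutdown() has been called. The following self-contained sketch (not the StatsD writer itself) reproduces the failure mode:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;

public class RejectedAfterShutdownSketch {

    static boolean submitAfterShutdown() {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.shutdown(); // the state the StatsD sender's pool is in during app shutdown
        try {
            pool.submit(() -> {}); // any late metric write
            return false;
        } catch (RejectedExecutionException e) {
            return true; // mirrors "Task ... rejected from ThreadPoolExecutor"
        }
    }

    public static void main(String[] args) {
        System.out.println(submitAfterShutdown());
    }
}
```

This supports the first hypothesis above: the writer's executor is already shut down by the time the DisposableBean decrements the counter, so the ordering of bean destruction is the thing to fix.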
Any Suggestions please?
EDIT
com.netflix.hystrix.contrib.metrics.eventstream.HystrixMetricsPoller.getCommandJson() has the following piece of code
json.writeNumberField("reportingHosts", 1); // this will get summed across all instances in a cluster
I'm not sure how/why the numbers will add up? Where can I find that logic?

spring cloud contract verification at deployment

I have extensively gone through Spring Cloud Contract. It is very effective for TDD. I want to verify the contract during actual deployment. I have n microservices (Spring stream: Source/Processor/Sink) and want to allow the user to link them when they define a stream (Kafka) in the Data Flow server dashboard. I am passing certain objects in the stream which act as input/output for the microservices. I want to check the compatibility of the microservices and warn the user accordingly. Spring Cloud Contract facilitates verifying the contract at development time, not at run time.
Kindly help.
I am new to Spring Cloud Contract, but I have found a way to start StubRunner; however, when it triggers the contract I get the following:
2017-04-26 16:14:10,373 INFO main c.s.s.ContractTester:36 - ContractTester : consumerMessageListener >>>>>>>>>>>>>>>>>>>>>>>>>>>>org.springframework.cloud.contract.stubrunner.BatchStubRunner#5e13f156
2017-04-26 16:14:10,503 ERROR main o.s.c.c.v.m.s.StreamStubMessages:63 - Exception occurred while trying to send a message [GenericMessage [payload={"name":"First","description":"Valid","value":1}, headers={id=49c6cc5c-93c8-2498-934a-175f60f42c03, timestamp=1493203450482}]] to a channel with name [verifications]
org.springframework.messaging.MessageDeliveryException: Dispatcher has no subscribers for channel 'application.input'.; nested exception is org.springframework.integration.MessageDispatchingException: Dispatcher has no subscribers, failedMessage=GenericMessage [payload={"name":"First","description":"Valid","value":1}, headers={id=49c6cc5c-93c8-2498-934a-175f60f42c03, timestamp=1493203450482}]
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:93)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:423)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:373)
at org.springframework.cloud.contract.verifier.messaging.stream.StreamStubMessages.send(StreamStubMessages.java:60)
at org.springframework.cloud.contract.verifier.messaging.stream.StreamStubMessages.send(StreamStubMessages.java:
The same works fine with mvn install, but not with the main class.
...
@RunWith(SpringRunner.class)
@AutoConfigureMessageVerifier
@EnableAutoConfiguration
@EnableIntegration
@Component
@DirtiesContext
public class ContractTester {

    private static Logger logger = LoggerFactory.getLogger(ContractTester.class);

    @Autowired StubTrigger stubTrigger;
    @Autowired ConsumerMessageListener consumerMessageListener;

    @Bean
    public boolean validSimpleObject() throws Exception {
        logger.info("ContractTester : consumerMessageListener >>>>>>>>>>>>>>>>>>>>>>>>>>>>" + stubTrigger);
        stubTrigger.trigger("accepted_message");
        if (consumerMessageListener == null) {
            logger.info("ContractTester : consumerMessageListener >>>>>>>>>>>>>>>>>>>>>>>>>>>>");
        }
        logger.info("ContractTester >>>>>>>>>>>>>>>>>>>>>>>>>>>>" + consumerMessageListener.toString());
        SimpleObject simpleObject = (SimpleObject) consumerMessageListener.getSimpleObject();
        logger.info("simpleObject >>>>>>>>>>>>>>>>>>>>>>>>>>>>" + simpleObject.toString());
        assertEquals(1, simpleObject.getValue());
        //then(listener.eligibleCounter.get()).isGreaterThan(initialCounter);
        return true;
    }
}
