How do I access header information in Kafka materialized views?

How do I access custom headers in a materialized view? I'm trying to build some custom DLQ logic in my application and want to build a retry mechanism based on header information. The actual retry is triggered by a scheduler, which should look up this header information in a materialized view.
Here are some code snippets:
Create Materialized View:
@Slf4j
@EnableBinding(DlqBinding.class)
public class DlqRetryService {

    @StreamListener
    public void readTable(@Input(DlqBinding.DLQ_TOPIC) KTable<String, String> table) {
    }
}

public interface DlqBinding {

    String DLQ_TOPIC = "dlq";

    @Input(DLQ_TOPIC)
    KTable<?, ?> dlqInput();
}
spring:
  cloud:
    stream:
      kafka:
        streams:
          binder:
            brokers: localhost:29092
            configuration:
              default:
                key.serde: org.apache.kafka.common.serialization.Serdes$StringSerde
                value.serde: org.apache.kafka.common.serialization.Serdes$StringSerde
          bindings:
            dlq:
              consumer:
                materializedAs: currentDL
                keySerde: org.apache.kafka.common.serialization.Serdes$StringSerde
                valueSerde: org.apache.kafka.common.serialization.Serdes$StringSerde
Scheduler:
public void processDL() {
    ReadOnlyKeyValueStore<Object, Object> currentDL =
            interactiveQueryService.getQueryableStore("currentDL", QueryableStoreTypes.keyValueStore());
    KeyValueIterator<Object, Object> all = currentDL.all();
    while (all.hasNext()) {
        KeyValue<Object, Object> next = all.next();
        log.info("Found Entry in currentDL: {}", next);
        // some retry logic would be here
    }
}

I don't think it is possible to access headers that way from the materialized view using interactive queries.
It's not clear what you are retrying. Are you trying to re-process the records?
You can encapsulate this kind of logic by using a KStream and the transformer/processor API. Here is a blueprint of such a pattern. Keep in mind that this is a general way of accessing headers. Your application might need to adjust based on your particular use case.
@StreamListener
public void processStream(@Input(DlqBinding.DLQ_TOPIC) KStream<String, String> stream) {
    stream.process(() -> new Processor() {

        ProcessorContext context;

        @Override
        public void init(ProcessorContext context) {
            this.context = context;
        }

        @Override
        public void process(Object key, Object value) {
            final Headers headers = this.context.headers();
            final Iterable<Header> headerXyz = headers.headers("HEADER_KEY");
            // iterate on the header information returned and perform
            // your application specific logic here.
        }

        @Override
        public void close() {
        }
    });
}
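If the scheduler ultimately needs those header values, another option (not part of the original answer, just a sketch) is to copy the relevant header into the record value inside a value transformer and then materialize the result into a queryable store. This assumes the binding is changed to return a KStream as above, String serdes as configured, and org.apache.kafka.streams.kstream.ValueTransformerWithKey; the store name currentDLWithHeaders, the header key HEADER_KEY and the "header|payload" encoding are placeholders.

@StreamListener
public void readTable(@Input(DlqBinding.DLQ_TOPIC) KStream<String, String> stream) {
    // copies the header value into the record value so it survives materialization
    ValueTransformerWithKeySupplier<String, String, String> enrichWithHeader = () ->
            new ValueTransformerWithKey<String, String, String>() {

                private ProcessorContext context;

                @Override
                public void init(ProcessorContext context) {
                    this.context = context;
                }

                @Override
                public String transform(String key, String value) {
                    Header header = context.headers().lastHeader("HEADER_KEY"); // placeholder header key
                    String headerValue = header == null ? "" : new String(header.value(), StandardCharsets.UTF_8);
                    return headerValue + "|" + value; // simple "header|payload" encoding
                }

                @Override
                public void close() {
                }
            };

    stream.transformValues(enrichWithHeader)
            // keep the latest enriched record per key in a store the scheduler can query
            .groupByKey()
            .reduce((oldValue, newValue) -> newValue, Materialized.as("currentDLWithHeaders"));
}

The scheduler can then query "currentDLWithHeaders" through the InteractiveQueryService exactly as in the processDL() snippet above and split the value back into header and payload.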

Related

java.lang.IllegalArgumentException: A consumerFactory is required

This configuration existed before my changes:
@Configuration
public class KafkaConfiguration {

    @Value(value = "${kafka.bootstrapServers:XXX}")
    private String bootstrapServers;

    @Value(value = "${kafka.connect.url:XXX}")
    public String kafkaConnectUrl;

    @Bean
    public AdminClient adminClient() {
        Map<String, Object> configs = new HashMap<>();
        configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        return AdminClient.create(configs);
    }

    @Bean
    public WebProperties.Resources resources() {
        return new WebProperties.Resources();
    }

    @Bean
    public KafkaConnectClient connectClient() {
        return new KafkaConnectClient(new org.sourcelab.kafka.connect.apiclient.Configuration(kafkaConnectUrl));
    }
}
I've set up the app so that I can use KafkaTemplate:
application.yaml
spring:
  profiles:
    active: dev
  kafka:
    bootstrap-servers: XXX
    properties:
      schema.registry.url: XXX
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: io.confluent.kafka.serializers.protobuf.KafkaProtobufSerializer
    consumer:
      group-id: kafka-management-service-group-id
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: io.confluent.kafka.serializers.protobuf.KafkaProtobufDeserializer
I can correctly write and read messages using KafkaTemplate from a topic with 1 partition that I created manually:
#GetMapping("/loadData")
public void loadData() {
IntStream.range(0, 1000000).forEach(key ->
{
final General general = General.newBuilder()
.setUuid(StringValue.newBuilder().setValue(String.valueOf(key))
.build()).build();
template.send("test",
String.valueOf(key),
ScheduledJob.newBuilder().setGeneral(general).build()
);
}
);
}
#GetMapping("/readData")
public void readData() {
Collection<TopicPartitionOffset> topicPartitionOffsets = new ArrayList<>();
final TopicPartitionOffset topicPartitionOffset = new TopicPartitionOffset("test", 0, SeekPosition.BEGINNING);
topicPartitionOffsets.add(topicPartitionOffset);
final Iterator<ConsumerRecord<String, ScheduledJob>> iterator = template.receive(topicPartitionOffsets).iterator();
while (iterator.hasNext()) {
final ConsumerRecord<String, ScheduledJob> next = iterator.next();
System.out.println("Key = "+next.key());
System.out.println("Value = "+next.value());
System.out.println();
}
}
Current task: I'd like to read all messages of a topic from a specific partition X (therefore I need the number of partitions), from a specific offset Y to a specific offset Z, and ignore the rest.
I've read that I can do something along these lines to get the partition number for a specific topic:
@Autowired
private final ConsumerFactory<Object, Object> consumerFactory;

public String[] partitions(String topic) {
    try (Consumer<Object, Object> consumer = consumerFactory.createConsumer()) {
        return consumer.partitionsFor(topic).stream()
                .map(pi -> "" + pi.partition())
                .toArray(String[]::new);
    }
}
Problem: Despite the fact that I can read and write messages, a ConsumerFactory isn't created automatically.
Context: Our frontend needs to support paging, multi-column ordering and filtering. The source is Kafka messages from a generic topic with any number of partitions. Maybe this isn't the best way to do it, so I'm also open to suggestions.
I don't see how it is possible to receive records without adding a consumer factory to the template; Boot does not do that. See KafkaTemplate.oneOnly(), which is called by the receive methods...
private Properties oneOnly() {
    Assert.notNull(this.consumerFactory, "A consumerFactory is required");
    Properties props = new Properties();
    props.setProperty(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "1");
    return props;
}
Boot auto-configures a consumer factory, but it does not add it to the template, because using the template to receive is an uncommon use case.
/**
 * Set a consumer factory for receive operations.
 * @param consumerFactory the consumer factory.
 * @since 2.8
 */
public void setConsumerFactory(ConsumerFactory<K, V> consumerFactory) {
    this.consumerFactory = consumerFactory;
}
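To make the receive() path work, one way is to give the template the auto-configured consumer factory. A minimal sketch (not the only way to wire it), assuming Boot's auto-configured ProducerFactory/ConsumerFactory beans and the String/ScheduledJob types from the question:

@Bean
public KafkaTemplate<String, ScheduledJob> kafkaTemplate(
        ProducerFactory<String, ScheduledJob> producerFactory,
        ConsumerFactory<String, ScheduledJob> consumerFactory) {
    KafkaTemplate<String, ScheduledJob> template = new KafkaTemplate<>(producerFactory);
    // setConsumerFactory (available since 2.8) enables the receive(...) operations used above
    template.setConsumerFactory(consumerFactory);
    return template;
}

For the actual task (all messages of partition X from offset Y to offset Z), a consumer created directly from the same ConsumerFactory is arguably simpler than the template, since it can be assigned and positioned explicitly. A rough sketch, with topic, partition, startOffset and endOffset supplied by the caller:

// assumes an injected ConsumerFactory<String, ScheduledJob> (the auto-configured factory)
public List<ConsumerRecord<String, ScheduledJob>> readRange(String topic, int partition,
        long startOffset, long endOffset) {
    List<ConsumerRecord<String, ScheduledJob>> result = new ArrayList<>();
    try (Consumer<String, ScheduledJob> consumer = consumerFactory.createConsumer()) {
        TopicPartition tp = new TopicPartition(topic, partition);
        consumer.assign(Collections.singletonList(tp));
        consumer.seek(tp, startOffset);
        // poll until the requested end offset is reached or no more data arrives
        while (consumer.position(tp) < endOffset) {
            ConsumerRecords<String, ScheduledJob> records = consumer.poll(Duration.ofSeconds(1));
            if (records.isEmpty()) {
                break;
            }
            for (ConsumerRecord<String, ScheduledJob> record : records) {
                if (record.offset() < endOffset) {
                    result.add(record);
                }
            }
        }
    }
    return result;
}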

How to intercept message republished to DLQ in Spring Cloud RabbitMQ?

I want to intercept messages that are republished to the DLQ after the retry limit is exhausted; my ultimate goal is to eliminate the x-exception-stacktrace header from those messages.
Config:
spring:
  application:
    name: sandbox
  cloud:
    function:
      definition: rabbitTest1Input
    stream:
      binders:
        rabbitTestBinder1:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                addresses: localhost:55015
                username: guest
                password: guest
                virtual-host: test
      bindings:
        rabbitTest1Input-in-0:
          binder: rabbitTestBinder1
          consumer:
            max-attempts: 3
          destination: ex1
          group: q1
      rabbit:
        bindings:
          rabbitTest1Input-in-0:
            consumer:
              autoBindDlq: true
              bind-queue: true
              binding-routing-key: q1key
              deadLetterExchange: ex1-DLX
              dlqDeadLetterExchange: ex1
              dlqDeadLetterRoutingKey: q1key_dlq
              dlqTtl: 180000
              prefetch: 5
              queue-name-group-only: true
              republishToDlq: true
              requeueRejected: false
              ttl: 86400000
@Configuration
class ConsumerConfig {

    companion object : KLogging()

    @Bean
    fun rabbitTest1Input(): Consumer<Message<String>> {
        return Consumer {
            logger.info("Received from test1 queue: ${it.payload}")
            throw AmqpRejectAndDontRequeueException("FAILED") // force republishing to DLQ after N retries
        }
    }
}
First I tried to register a @GlobalChannelInterceptor (like here), but since RabbitMessageChannelBinder uses its own private RabbitTemplate instance (not autowired) for republishing (see #getErrorMessageHandler), it doesn't get intercepted.
Then I tried to extend the RabbitMessageChannelBinder class by throwing away the code related to x-exception-stacktrace and then declaring this extension as a bean:
/**
 * Forked from {@link org.springframework.cloud.stream.binder.rabbit.RabbitMessageChannelBinder} with the goal
 * to eliminate {@link RepublishMessageRecoverer.X_EXCEPTION_STACKTRACE} header from messages republished to DLQ
 */
class RabbitMessageChannelBinderWithNoStacktraceRepublished
    : RabbitMessageChannelBinder(...)

// and then

@Configuration
@Import(
    RabbitAutoConfiguration::class,
    RabbitServiceAutoConfiguration::class,
    RabbitMessageChannelBinderConfiguration::class,
    PropertyPlaceholderAutoConfiguration::class,
)
@EnableConfigurationProperties(
    RabbitProperties::class,
    RabbitBinderConfigurationProperties::class,
    RabbitExtendedBindingProperties::class
)
class RabbitConfig {

    @Bean
    @Primary
    @Role(BeanDefinition.ROLE_INFRASTRUCTURE)
    @Order(Ordered.HIGHEST_PRECEDENCE)
    fun customRabbitMessageChannelBinder(
        appCtx: ConfigurableApplicationContext,
        ... // required injections
    ): RabbitMessageChannelBinder {
        // remove the original (auto-configured) bean. Explanation is after the code snippet
        val registry = appCtx.autowireCapableBeanFactory as BeanDefinitionRegistry
        registry.removeBeanDefinition("rabbitMessageChannelBinder")
        // ... and replace it with custom binder. It's initialized absolutely the same way as original bean, but is of forked class
        return RabbitMessageChannelBinderWithNoStacktraceRepublished(...)
    }
}
But in this case my channel binder doesn't respect the YAML properties (e.g. addresses: localhost:55015) and uses default values (e.g. localhost:5672)
INFO o.s.a.r.c.CachingConnectionFactory - Attempting to connect to: [localhost:5672]
INFO o.s.a.r.l.SimpleMessageListenerContainer - Broker not available; cannot force queue declarations during start: java.net.ConnectException: Connection refused
On the other hand, if I don't remove the original binder from the Spring context, I get the following error:
Caused by: java.lang.IllegalStateException: Multiple binders are available, however neither default nor per-destination binder name is provided. Available binders are [rabbitMessageChannelBinder, customRabbitMessageChannelBinder]
at org.springframework.cloud.stream.binder.DefaultBinderFactory.getBinder(DefaultBinderFactory.java:145)
Could anyone give me a hint on how to solve this problem?
P.S. I use Spring Cloud Stream 3.1.6 and Spring Boot 2.6.6
Disable the binder retry/DLQ configuration (maxAttempts=1, republishToDlq=false, and other dlq related properties).
Add a ListenerContainerCustomizer to add a custom retry advice to the advice chain, with a customized dead letter publishing recoverer.
Manually provision the DLQ using a Queue #Bean.
@SpringBootApplication
public class So72871662Application {

    public static void main(String[] args) {
        SpringApplication.run(So72871662Application.class, args);
    }

    @Bean
    public Consumer<String> input() {
        return str -> {
            System.out.println();
            throw new RuntimeException("test");
        };
    }

    @Bean
    ListenerContainerCustomizer<MessageListenerContainer> customizer(RetryOperationsInterceptor retry) {
        return (cont, dest, grp) -> {
            ((AbstractMessageListenerContainer) cont).setAdviceChain(retry);
        };
    }

    @Bean
    RetryOperationsInterceptor interceptor(MessageRecoverer recoverer) {
        return RetryInterceptorBuilder.stateless()
                .maxAttempts(3)
                .backOffOptions(3_000L, 2.0, 10_000L)
                .recoverer(recoverer)
                .build();
    }

    @Bean
    MessageRecoverer recoverer(RabbitTemplate template) {
        return new RepublishMessageRecoverer(template, "DLX", "errors") {

            @Override
            protected void doSend(@Nullable String exchange, String routingKey, Message message) {
                message.getMessageProperties().getHeaders().remove(RepublishMessageRecoverer.X_EXCEPTION_STACKTRACE);
                super.doSend(exchange, routingKey, message);
            }
        };
    }

    @Bean
    FanoutExchange dlx() {
        return new FanoutExchange("DLX");
    }

    @Bean
    Queue dlq() {
        return new Queue("errors");
    }

    @Bean
    Binding dlqb() {
        return BindingBuilder.bind(dlq()).to(dlx());
    }
}
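For the first step (switching off the binder's own retry/DLQ handling), the relevant parts of the question's YAML would change roughly like this; this is a sketch against the property names already used above, not a tested configuration:

spring:
  cloud:
    stream:
      bindings:
        rabbitTest1Input-in-0:
          consumer:
            max-attempts: 1          # let the custom advice chain do the retrying
      rabbit:
        bindings:
          rabbitTest1Input-in-0:
            consumer:
              republishToDlq: false  # republishing is handled by the custom recoverer
              autoBindDlq: false     # the DLX/DLQ are provisioned by the Queue/Exchange beans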

Spring Cloud Stream Confluent KStream Avro Consume

I'm trying to consume Confluent Avro messages from a Kafka topic as a KStream with Spring Boot 2.0.
I was able to consume the messages as a MessageChannel but not as a KStream.
@Input(ORGANIZATION)
KStream<String, Organization> organizationMessageChannel();

@StreamListener
public void processOrganization(@Input(KstreamBinding.ORGANIZATION) KStream<String, Organization> organization) {
    log.info("Organization Received:" + organization);
}
Exception:
Exception in thread "pcs-7bb7b444-044d-41bb-945d-450c902337ff-StreamThread-3" org.apache.kafka.streams.errors.StreamsException: stream-thread [pcs-7bb7b444-044d-41bb-945d-450c902337ff-StreamThread-3] Failed to rebalance.
    at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:860)
    at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:808)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:774)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:744)
Caused by: org.apache.kafka.streams.errors.StreamsException: Failed to configure value serde class io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
    at org.apache.kafka.streams.StreamsConfig.defaultValueSerde(StreamsConfig.java:859)
    at org.apache.kafka.streams.processor.internals.AbstractProcessorContext.<init>(AbstractProcessorContext.java:59)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.<init>(ProcessorContextImpl.java:42)
    at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:134)
    at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:404)
    at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:365)
    at org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.createTasks(StreamThread.java:350)
    at org.apache.kafka.streams.processor.internals.TaskManager.addStreamTasks(TaskManager.java:137)
    at org.apache.kafka.streams.processor.internals.TaskManager.createTasks(TaskManager.java:88)
    at org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener.onPartitionsAssigned(StreamThread.java:259)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:264)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:367)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:316)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:295)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1146)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1111)
    at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:851)
    ... 3 more
Caused by: io.confluent.common.config.ConfigException: Missing required configuration "schema.registry.url" which has no default value.
    at io.confluent.common.config.ConfigDef.parse(ConfigDef.java:243)
    at io.confluent.common.config.AbstractConfig.<init>(AbstractConfig.java:78)
    at io.confluent.kafka.serializers.AbstractKafkaAvroSerDeConfig.<init>(AbstractKafkaAvroSerDeConfig.java:61)
    at io.confluent.kafka.serializers.KafkaAvroSerializerConfig.<init>(KafkaAvroSerializerConfig.java:32)
    at io.confluent.kafka.serializers.KafkaAvroSerializer.configure(KafkaAvroSerializer.java:48)
    at io.confluent.kafka.streams.serdes.avro.SpecificAvroSerializer.configure(SpecificAvroSerializer.java:58)
    at io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde.configure(SpecificAvroSerde.java:107)
    at org.apache.kafka.streams.StreamsConfig.defaultValueSerde(StreamsConfig.java:855)
    ... 19 more
Based on the error, I think I'm missing the schema.registry.url configuration for Confluent.
I had a quick look at the sample here.
I'm a bit lost on how to do the same with Spring Cloud Stream using the StreamListener.
Does this need to be a separate configuration? Or is there a way to configure the schema.registry.url that Confluent is looking for in application.yml itself?
here is the code repo https://github.com/naveenpop/springboot-kstream-confluent
Organization.avsc
{
  "namespace": "com.test.demo.avro",
  "type": "record",
  "name": "Organization",
  "fields": [
    { "name": "orgId", "type": "string", "default": "null" },
    { "name": "orgName", "type": "string", "default": "null" },
    { "name": "orgType", "type": "string", "default": "null" },
    { "name": "parentOrgId", "type": "string", "default": "null" }
  ]
}
DemokstreamApplication.java
@SpringBootApplication
@EnableSchemaRegistryClient
@Slf4j
public class DemokstreamApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemokstreamApplication.class, args);
    }

    @Component
    public static class organizationProducer implements ApplicationRunner {

        @Autowired
        private KafkaProducer kafkaProducer;

        @Override
        public void run(ApplicationArguments args) throws Exception {
            log.info("Starting: Run method");

            List<String> names = Arrays.asList("blue", "red", "green", "black", "white");
            List<String> pages = Arrays.asList("whiskey", "wine", "rum", "jin", "beer");

            Runnable runnable = () -> {
                String rPage = pages.get(new Random().nextInt(pages.size()));
                String rName = names.get(new Random().nextInt(names.size()));

                try {
                    this.kafkaProducer.produceOrganization(rPage, rName, "PARENT", "111");
                } catch (Exception e) {
                    log.info("Exception :" + e);
                }
            };
            Executors.newScheduledThreadPool(1).scheduleAtFixedRate(runnable, 1, 1, TimeUnit.SECONDS);
        }
    }
}
KafkaConfig.java
@Configuration
public class KafkaConfig {

    @Value("${spring.cloud.stream.schemaRegistryClient.endpoint}")
    private String endpoint;

    @Bean
    public SchemaRegistryClient confluentSchemaRegistryClient() {
        ConfluentSchemaRegistryClient client = new ConfluentSchemaRegistryClient();
        client.setEndpoint(endpoint);
        return client;
    }
}
KafkaConsumer.java
@Slf4j
@EnableBinding(KstreamBinding.class)
public class KafkaConsumer {

    @StreamListener
    public void processOrganization(@Input(KstreamBinding.ORGANIZATION_INPUT) KStream<String, Organization> organization) {
        organization.foreach((s, organization1) -> log.info("KStream Organization Received:" + organization1));
    }
}
KafkaProducer.java
@EnableBinding(KstreamBinding.class)
public class KafkaProducer {

    @Autowired
    private KstreamBinding kstreamBinding;

    public void produceOrganization(String orgId, String orgName, String orgType, String parentOrgId) {
        try {
            Organization organization = Organization.newBuilder()
                    .setOrgId(orgId)
                    .setOrgName(orgName)
                    .setOrgType(orgType)
                    .setParentOrgId(parentOrgId)
                    .build();

            kstreamBinding.organizationOutputMessageChannel()
                    .send(MessageBuilder.withPayload(organization)
                            .setHeader(KafkaHeaders.MESSAGE_KEY, orgName)
                            .build());
        } catch (Exception e) {
            log.error("Failed to produce Organization Message:" + e);
        }
    }
}
KstreamBinding.java
public interface KstreamBinding {

    String ORGANIZATION_INPUT = "organizationInput";
    String ORGANIZATION_OUTPUT = "organizationOutput";

    @Input(ORGANIZATION_INPUT)
    KStream<String, Organization> organizationInputMessageChannel();

    @Output(ORGANIZATION_OUTPUT)
    MessageChannel organizationOutputMessageChannel();
}
Update 1:
I applied the suggestion from dturanski here and the error vanished. However, I'm still not able to consume the message as KStream<String, Organization>; there is no error in the console.
Update 2:
I applied the suggestion from sobychacko here and the message is consumable, but with empty values in the object.
I've made a commit to the GitHub sample to produce the message from Spring Boot itself, and I'm still getting empty values.
Thanks for your time on this issue.
The following implementation will not do what you are intending:
@StreamListener
public void processOrganization(@Input(KstreamBinding.ORGANIZATION) KStream<String, Organization> organization) {
    log.info("Organization Received:" + organization);
}
That log statement is only invoked once, at the bootstrap phase. In order for this to work, you need to invoke some operations on the received KStream and provide the logic there. For example, the following works, where I provide a lambda expression in the foreach method call.
@StreamListener
public void processOrganization(@Input(KstreamBinding.ORGANIZATION) KStream<String, Organization> organization) {
    organization.foreach((s, organization1) -> log.info("Organization Received:" + organization1));
}
You also have an issue in the configuration where you are wrongly assigning the Avro Serde for keys, when they are actually Strings. Change it like this:
default:
  key:
    serde: org.apache.kafka.common.serialization.Serdes$StringSerde
  value:
    serde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
With these changes, I get the logging statement each time I send something to the topic. However, there is a problem in your sending groovy script: I am not getting any actual data in your Organization domain object, but I will let you figure that out.
Update on the issue with the empty Organization domain object
This happens because you have a mix of serialization strategies going on. You are using Spring Cloud Stream's Avro message converters on the producer side, but in the Kafka Streams processor you are using the Confluent Avro Serdes. I just tried with Confluent's serializers all the way from the producer to the processor and I was able to see the Organization domain object on the outbound. Here is the modified configuration to make the serialization consistent.
spring:
  application:
    name: kstream
  cloud:
    stream:
      schemaRegistryClient:
        endpoint: http://localhost:8081
      schema:
        avro:
          schema-locations: classpath:avro/Organization.avsc
      bindings:
        organizationInput:
          destination: organization-updates
          group: demokstream.org
          consumer:
            useNativeDecoding: true
        organizationOutput:
          destination: organization-updates
          producer:
            useNativeEncoding: true
      kafka:
        bindings:
          organizationOutput:
            producer:
              configuration:
                key.serializer: org.apache.kafka.common.serialization.StringSerializer
                value.serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
                schema.registry.url: http://localhost:8081
        streams:
          binder:
            brokers: localhost
            configuration:
              schema.registry.url: http://localhost:8081
              commit:
                interval:
                  ms: 1000
              default:
                key:
                  serde: org.apache.kafka.common.serialization.Serdes$StringSerde
                value:
                  serde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
You can also remove the KafkaConfig class as well as the EnableSchemaRegistryClient annotation from the main application class.
Try spring.cloud.stream.kafka.streams.binder.configuration.schema.registry.url: ...
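Spelled out as YAML, that property sits under the Kafka Streams binder configuration (the value is whatever your environment uses; localhost:8081 matches the sample configuration above):

spring:
  cloud:
    stream:
      kafka:
        streams:
          binder:
            configuration:
              schema.registry.url: http://localhost:8081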

Micrometer filter is ignored with CompositeMeterRegistry

I use Spring Boot 2.1.2.RELEASE, and I try to use Micrometer with CompositeMeterRegistry. My goal is to publish some selected meters to ElasticSearch. The code below shows my sample config. The problem is that the filter is completely ignored (so all metrics are sent to ElasticSearch), although I can see in the logs that it was processed ("filter reply of meter ..." lines).
Strangely, if I define the MeterFilter as a Spring bean, then it's applied to ALL registries (however, I want it to be applied only to the "elasticMeterRegistry").
Here is a sample configuration class:
@Configuration
public class AppConfiguration {

    @Bean
    public ElasticConfig elasticConfig() {
        return new ElasticConfig() {
            @Override
            @Nullable
            public String get(final String k) {
                return null;
            }
        };
    }

    @Bean
    public MeterRegistry meterRegistry(final ElasticConfig elasticConfig) {
        final CompositeMeterRegistry registry = new CompositeMeterRegistry();
        registry.add(new SimpleMeterRegistry());
        registry.add(new JmxMeterRegistry(new JmxConfig() {
            @Override
            public Duration step() {
                return Duration.ofSeconds(10);
            }

            @Override
            @Nullable
            public String get(String k) {
                return null;
            }
        }, Clock.SYSTEM));

        final ElasticMeterRegistry elasticMeterRegistry = new ElasticMeterRegistry(elasticConfig, Clock.SYSTEM);
        elasticMeterRegistry.config().meterFilter(new MeterFilter() {
            @Override
            public MeterFilterReply accept(Meter.Id id) {
                final MeterFilterReply reply =
                        id.getName().startsWith("logback")
                                ? MeterFilterReply.NEUTRAL
                                : MeterFilterReply.DENY;
                log.info("filter reply of meter {}: {}", id.getName(), reply);
                return reply;
            }
        });
        registry.add(elasticMeterRegistry);

        return registry;
    }
}
So, I expect ElasticSearch to receive only "logback" metrics, and JMX to receive all metrics.
UPDATE:
I have played with filters and found a "solution", but I don't really understand why the code above doesn't work.
This works:
elasticMeterRegistry.config().meterFilter(new MeterFilter() {
    @Override
    public MeterFilterReply accept(Meter.Id id) {
        final MeterFilterReply reply =
                id.getName().startsWith("logback")
                        ? MeterFilterReply.ACCEPT
                        : MeterFilterReply.DENY;
        log.info("filter reply of meter {}: {}", id.getName(), reply);
        return reply;
    }
});
The difference is: I return ACCEPT instead of NEUTRAL.
Strangely, the following code does not work (ES gets all metrics):
elasticMeterRegistry.config().meterFilter(
        MeterFilter.accept(id -> id.getName().startsWith("logback")));
But this works:
elasticMeterRegistry.config().meterFilter(
        MeterFilter.accept(id -> id.getName().startsWith("logback")));
elasticMeterRegistry.config().meterFilter(
        MeterFilter.deny());
CONCLUSION:
So, it seems that instead of NEUTRAL, the filter should return ACCEPT. But for meters not starting with "logback", my original filter (with NEUTRAL) returns DENY. Then why are those metrics published to the ElasticSearch registry?
Can someone explain this?
This is really a composite of questions. I'll just point out a few things.
Even with the MeterRegistry bean you defined, Spring Boot will still auto-configure an ElasticMeterRegistry bean, since there is no ElasticMeterRegistry bean of your own (the filtered instance in your config is created inline, not exposed as a bean). Instead of creating a CompositeMeterRegistry bean on your own, just define a custom ElasticMeterRegistry bean with the MeterFilter you want applied, and let Spring Boot create the CompositeMeterRegistry bean for you.
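A rough sketch of that approach (assuming the same ElasticConfig bean as in the question and Boot's auto-configured Clock; denyUnless is Micrometer's shorthand for denying everything that doesn't match the predicate):

@Bean
public ElasticMeterRegistry elasticMeterRegistry(ElasticConfig elasticConfig, Clock clock) {
    ElasticMeterRegistry registry = new ElasticMeterRegistry(elasticConfig, clock);
    // only meters whose name starts with "logback" reach ElasticSearch
    registry.config().meterFilter(MeterFilter.denyUnless(id -> id.getName().startsWith("logback")));
    return registry;
}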
For MeterFilterReply, ACCEPT accepts the meter immediately, DENY denies it immediately, and NEUTRAL postpones the decision to the next filter(s). Basically, meters will be accepted unless some filter returns DENY.

Spring Cloud Stream Kafka Channel Not Working in Spring Boot Application

I have been attempting to get an inbound SubscribableChannel and outbound MessageChannel working in my spring boot application.
I have set up the Kafka channel and tested it successfully.
Furthermore, I have created a basic Spring Boot application that tests adding and receiving things from the channel.
The issue I am having is that when I put the equivalent code in the application it belongs in, it appears that the messages never get sent or received. By debugging it's hard to ascertain what's going on, but the only thing that looks different to me is the channel name. In the working implementation the channel name is like application.channel; in the non-working app it's localhost:8080/channel.
I was wondering if there is some Spring Boot configuration blocking or altering the creation of the channels into a different channel source?
Anyone had any similar issues?
application.yml
spring:
  datasource:
    url: jdbc:h2:mem:dpemail;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE
    platform: h2
    username: hello
    password:
    driverClassName: org.h2.Driver
  jpa:
    properties:
      hibernate:
        show_sql: true
        use_sql_comments: true
        format_sql: true
  cloud:
    stream:
      kafka:
        binder:
          brokers: localhost:9092
      bindings:
        email-in:
          destination: email
          contentType: application/json
        email-out:
          destination: email
          contentType: application/json
Email
public class Email {

    private long timestamp;
    private String message;

    public long getTimestamp() {
        return timestamp;
    }

    public void setTimestamp(long timestamp) {
        this.timestamp = timestamp;
    }

    public String getMessage() {
        return message;
    }

    public void setMessage(String message) {
        this.message = message;
    }
}
Binding Config
@EnableBinding(EmailQueues.class)
public class EmailQueueConfiguration {
}
Interface
public interface EmailQueues {

    String INPUT = "email-in";
    String OUTPUT = "email-out";

    @Input(INPUT)
    SubscribableChannel inboundEmails();

    @Output(OUTPUT)
    MessageChannel outboundEmails();
}
Controller
@RestController
@RequestMapping("/queue")
public class EmailQueueController {

    private EmailQueues emailQueues;

    @Autowired
    public EmailQueueController(EmailQueues emailQueues) {
        this.emailQueues = emailQueues;
    }

    @RequestMapping(value = "sendEmail", method = POST)
    @ResponseStatus(ACCEPTED)
    public void sendToQueue() {
        MessageChannel messageChannel = emailQueues.outboundEmails();
        Email email = new Email();
        email.setMessage("hello world: " + System.currentTimeMillis());
        email.setTimestamp(System.currentTimeMillis());
        messageChannel.send(MessageBuilder.withPayload(email).setHeader(MessageHeaders.CONTENT_TYPE, MimeTypeUtils.APPLICATION_JSON).build());
    }

    @StreamListener(EmailQueues.INPUT)
    public void handleEmail(@Payload Email email) {
        System.out.println("received: " + email.getMessage());
    }
}
I'm not sure if one of the inherited configuration projects using Spring Cloud or Spring Cloud Sleuth might be preventing it from working, but even when I remove them it still doesn't work. Unlike my application that does work with the above code, I never see the ConsumerConfig being configured, e.g.:
o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
    auto.commit.interval.ms = 100
    auto.offset.reset = latest
    bootstrap.servers = [localhost:9092]
    check.crcs = true
    client.id = consumer-2
    connections.max.idle.ms = 540000
    enable.auto.commit = false
    exclude.internal.topics = true
(This configuration is what I see in my basic Spring Boot application when running the above code, and there the code works, writing to and reading from the Kafka channel.)
I assume there is some other Spring Boot configuration from one of the libraries I'm using that creates a different type of channel; I just cannot find what that configuration is.
What you posted contains a lot of unrelated configuration, so it is hard to determine if anything gets in the way. Also, when you say "..it appears that the messages never get sent or received..", are there any exceptions in the logs? Also, please state the version of Kafka you're using as well as Spring Cloud Stream.
Now, I did try to reproduce it based on your code (after cleaning up a bit to only leave relevant parts) and was able to successfully send/receive.
My Kafka version is 0.11 and Spring Cloud Stream 2.0.0.
Here is the relevant code:
spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: localhost:9092
      bindings:
        email-in:
          destination: email
        email-out:
          destination: email
@SpringBootApplication
@EnableBinding(KafkaQuestionSoApplication.EmailQueues.class)
public class KafkaQuestionSoApplication {

    public static void main(String[] args) {
        SpringApplication.run(KafkaQuestionSoApplication.class, args);
    }

    @Bean
    public ApplicationRunner runner(EmailQueues emailQueues) {
        return new ApplicationRunner() {
            @Override
            public void run(ApplicationArguments args) throws Exception {
                emailQueues.outboundEmails().send(new GenericMessage<String>("Hello"));
            }
        };
    }

    @StreamListener(EmailQueues.INPUT)
    public void handleEmail(String payload) {
        System.out.println("received: " + payload);
    }

    public interface EmailQueues {

        String INPUT = "email-in";
        String OUTPUT = "email-out";

        @Input(INPUT)
        SubscribableChannel inboundEmails();

        @Output(OUTPUT)
        MessageChannel outboundEmails();
    }
}
Okay, so after a lot of debugging I discovered that something is creating a Test Support Binder (how, I don't know yet), which is obviously used to avoid adding messages to a real channel.
After adding
@SpringBootApplication(exclude = TestSupportBinderAutoConfiguration.class)
the Kafka channel configuration works and messages are being added. It would be interesting to know what on earth is setting up this test support binder... I'll find that sucker eventually.
