Kafka starts throwing replication factor error as soon as spring boot app connects to it - spring-boot

Kafka starts flooding the logs with the following error as soon as the Spring Boot application connects:
INFO [Admin Manager on Broker 2]: Error processing create topic request CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=3, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')]) (kafka.server.ZkAdminManager)
org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 3 larger than available brokers: 1.
Here's the Kafka config I pass via the application YAML:
spring:
  cloud:
    stream:
      instanceIndex: 0
      kafka:
        default:
          producer:
            topic:
              replication-factor: 1
            configuration:
              key:
                serializer: org.apache.kafka.common.serialization.StringSerializer
              spring:
                json:
                  add:
                    type:
                      headers: 'false'
              value:
                serializer: org.springframework.kafka.support.serializer.JsonSerializer
              max:
                block:
                  ms: '100'
          consumer:
            topic:
              replication-factor: 1
            configuration:
              key:
                deserializer: org.apache.kafka.common.serialization.StringDeserializer
              spring:
                json:
                  trusted:
                    packages: com.message
              value:
                deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
        binder:
          autoCreateTopics: 'false'
          brokers: localhost:19092
          replicationFactor: 1
      bindings:
        consume-in-0:
          group: ${category}-${spring.application.name}-consumer-${runtime-env}
          destination: alm-tom-${runtime-env}
        publish-out-0:
          destination: ${category}-${spring.application.name}-${runtime-env}
I don't see any other config to control the consumer offsets topic replication factor.

The error is coming from the broker itself. It will only start logging this when you create your first consumer group...
Check the server.properties file for offsets.topic.replication.factor.
Similarly, the transactions topic has its own replication factor that needs to be reduced (assuming you're using a newer Kafka version).
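On a single-broker development setup, the broker-side settings that typically need lowering look roughly like this (a sketch of server.properties values, not taken from the question's environment):
# server.properties on the lone broker (sketch for a one-node dev cluster)
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1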

Related

Is it possible to create a multi-binder binding with Spring-Cloud-Streams Kafka-Streams to stream from a cluster A and produce to cluster B

I want to create a Kafka-Streams application with Spring-Cloud-Streams which integrates 2 different Kafka Clusters / setups. I tried to implement it using multi-binder configurations as mentioned in the documentation and similar to the examples here:
https://github.com/spring-cloud/spring-cloud-stream-samples/tree/main/multi-binder-samples
Given a simple function like this:
@Bean
public Function<KStream<String, AnalyticsEvent>, KStream<String, UpdateEvent>> analyticsEventProcessor() {
    return input -> input
            .filter(new AnalyticsPredicate())
            .map(new AnalyticsToUpdateEventMapper());
}
In the configuration I'm trying to bind these to different binders:
spring.cloud:
  stream:
    bindings:
      analyticsEventProcessor-in-0:
        destination: analytics-events
        binder: cluster1-kstream
      analyticsEventProcessor-out-0:
        destination: update-events
        binder: cluster2-kstream
    binders:
      cluster1-kstream:
        type: kstream
        environment:
          spring:
            cloud:
              stream:
                kafka:
                  binder:
                    brokers: <url cluster1>:9093
                    configuration:
                      security.protocol: SSL
                      schema.registry.url: <schema-registry-url-cluster1>
                      schema.registry.ssl.truststore.location: /mnt/secrets/cluster1/truststore.jks
                      schema.registry.ssl.truststore.password: ${SPRING_KAFKA_SSL_CLUSTER1_TRUST-STORE-PASSWORD}
                      schema.registry.ssl.keystore.location: /mnt/secrets/cluster1/keystore.jks
                      schema.registry.ssl.keystore.password: ${SPRING_KAFKA_SSL_CLUSTER1_KEY-STORE-PASSWORD}
                      ssl.truststore.location: /mnt/secrets/cluster1/truststore.jks
                      ssl.truststore.password: ${SPRING_KAFKA_SSL_CLUSTER1_TRUST-STORE-PASSWORD}
                      ssl.truststore.type: JKS
                      ssl.keystore.location: /mnt/secrets/cluster1/keystore.jks
                      ssl.keystore.password: ${SPRING_KAFKA_SSL_CLUSTER1_KEY-STORE-PASSWORD}
                      ssl.keystore.type: JKS
                      ssl.enabled.protocols: TLSv1.2
                  streams:
                    binder:
                      brokers: <url cluster1>:9093
                      configuration:
                        security.protocol: SSL
                        schema.registry.url: <schema-registry-url-cluster1>
                        schema.registry.ssl.truststore.location: /mnt/secrets/cluster1/truststore.jks
                        schema.registry.ssl.truststore.password: ${SPRING_KAFKA_SSL_CLUSTER1_TRUST-STORE-PASSWORD}
                        schema.registry.ssl.keystore.location: /mnt/secrets/cluster1/keystore.jks
                        schema.registry.ssl.keystore.password: ${SPRING_KAFKA_SSL_CLUSTER1_KEY-STORE-PASSWORD}
                        ssl.truststore.location: /mnt/secrets/cluster1/truststore.jks
                        ssl.truststore.password: ${SPRING_KAFKA_SSL_CLUSTER1_TRUST-STORE-PASSWORD}
                        ssl.truststore.type: JKS
                        ssl.keystore.location: /mnt/secrets/cluster1/keystore.jks
                        ssl.keystore.password: ${SPRING_KAFKA_SSL_CLUSTER1_KEY-STORE-PASSWORD}
                        ssl.keystore.type: JKS
                        ssl.enabled.protocols: TLSv1.2
      cluster2-kstream:
        type: kstream
        environment:
          spring:
            cloud:
              stream:
                kafka:
                  binder:
                    brokers: <url cluster2>:9093
                    configuration:
                      security.protocol: SSL
                      schema.registry.url: <schema-registry-url-cluster2>
                      schema.registry.ssl.truststore.location: /mnt/secrets/cluster2/truststore.jks
                      schema.registry.ssl.truststore.password: ${SPRING_KAFKA_SSL_CLUSTER2_TRUST-STORE-PASSWORD}
                      schema.registry.ssl.keystore.location: /mnt/secrets/cluster2/keystore.jks
                      schema.registry.ssl.keystore.password: ${SPRING_KAFKA_SSL_CLUSTER2_KEY-STORE-PASSWORD}
                      ssl.truststore.location: /mnt/secrets/cluster2/truststore.jks
                      ssl.truststore.password: ${SPRING_KAFKA_SSL_CLUSTER2_TRUST-STORE-PASSWORD}
                      ssl.truststore.type: JKS
                      ssl.keystore.location: /mnt/secrets/cluster2/keystore.jks
                      ssl.keystore.password: ${SPRING_KAFKA_SSL_CLUSTER2_KEY-STORE-PASSWORD}
                      ssl.keystore.type: JKS
                      ssl.enabled.protocols: TLSv1.2
                  streams:
                    binder:
                      brokers: <url cluster2>:9093
                      configuration:
                        security.protocol: SSL
                        schema.registry.url: <schema-registry-url-cluster2>
                        schema.registry.ssl.truststore.location: /mnt/secrets/cluster2/truststore.jks
                        schema.registry.ssl.truststore.password: ${SPRING_KAFKA_SSL_CLUSTER2_TRUST-STORE-PASSWORD}
                        schema.registry.ssl.keystore.location: /mnt/secrets/cluster2/keystore.jks
                        schema.registry.ssl.keystore.password: ${SPRING_KAFKA_SSL_CLUSTER2_KEY-STORE-PASSWORD}
                        ssl.truststore.location: /mnt/secrets/cluster2/truststore.jks
                        ssl.truststore.password: ${SPRING_KAFKA_SSL_CLUSTER2_TRUST-STORE-PASSWORD}
                        ssl.truststore.type: JKS
                        ssl.keystore.location: /mnt/secrets/cluster2/keystore.jks
                        ssl.keystore.password: ${SPRING_KAFKA_SSL_CLUSTER2_KEY-STORE-PASSWORD}
                        ssl.keystore.type: JKS
                        ssl.enabled.protocols: TLSv1.2
I first tried running the application entirely against a single cluster, which worked well. When I run it with the multi-binder setup, I always get this error:
2022-08-10 15:28:42.892 WARN 1 --- [-StreamThread-2] org.apache.kafka.clients.NetworkClient : [Consumer clientId=<clientid>-StreamThread-2-consumer, groupId=<group-id>] Error while fetching metadata with correlation id 2 : {analytics-events=TOPIC_AUTHORIZATION_FAILED}
2022-08-10 15:28:42.893 ERROR 1 --- [-StreamThread-2] org.apache.kafka.clients.Metadata : [Consumer clientId=<client-id>, groupId=<group-id>] Topic authorization failed for topics [analytics-events]
2022-08-10 15:28:42.893 INFO 1 --- [-StreamThread-2] org.apache.kafka.clients.Metadata : [Consumer clientId=<client-id>, groupId=<group-id>] Cluster ID: <cluster-id>
2022-08-10 15:28:42.893 ERROR 1 --- [-StreamThread-2] c.s.a.a.e.UncaughtExceptionHandler : org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [analytics-events]
2022-08-10 15:28:42.893 ERROR 1 --- [-StreamThread-2] org.apache.kafka.streams.KafkaStreams : stream-client [<client-id>] Replacing thread in the streams uncaught exception handler
org.apache.kafka.streams.errors.StreamsException: org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [analytics-events]
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:642) ~[kafka-streams-3.1.1.jar!/:na]
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:576) ~[kafka-streams-3.1.1.jar!/:na]
Caused by: org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [analytics-events]
I verified the Kafka client certificates and they should be correct; I inspected them with keytool, and the password environment variables are set correctly. The consumer config also uses the correct broker URL.
Is it possible, with a multi-binder setup, to use different Kafka clusters for a KStream function's input and output, or does that only work with binders of type kafka?
In Kafka Streams, you cannot connect to two different clusters in a single application. This means that you cannot receive from a cluster on the inbound and write to another cluster on the outbound when using a Spring Cloud Stream function. See this SO [thread][1] for more details.
You can probably receive from and write to the same cluster in your Kafka Streams function as a workaround. Then, using a regular Kafka binder-based function, simply bridge the output topic to the second cluster. Regular (non-Kafka Streams) functions can consume from and publish to multiple clusters.
@Bean
public Function<KStream<String, AnalyticsEvent>, KStream<String, UpdateEvent>> analyticsEventProcessor() {
    return input -> input
            .filter(new AnalyticsPredicate())
            .map(new AnalyticsToUpdateEventMapper());
}
This function needs to receive and write to the same cluster. Then you can have another function as below.
@Bean
public Function<?, ?> bridgeFunction() {
    ....
}
For this function, input is cluster-1 and output is cluster-2.
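As a rough sketch (the kafka-type binder names and the bridge binding names below are hypothetical, not taken from the question), the bridge's bindings would point at two regular kafka binders, one per cluster:
spring:
  cloud:
    stream:
      bindings:
        bridgeFunction-in-0:
          destination: update-events       # topic written by the KStream function on cluster 1
          binder: cluster1-kafka
        bridgeFunction-out-0:
          destination: update-events
          binder: cluster2-kafka
      binders:
        cluster1-kafka:
          type: kafka
          environment:
            spring.cloud.stream.kafka.binder.brokers: <url cluster1>:9093
        cluster2-kafka:
          type: kafka
          environment:
            spring.cloud.stream.kafka.binder.brokers: <url cluster2>:9093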
When using this workaround, make sure to include the regular Kafka binder also as a dependency - spring-cloud-stream-binder-kafka.
Keep in mind that this approach has disadvantages, such as the overhead of an extra topic and the added latency. However, it is a potential workaround for this use case. For more options, see the SO thread I mentioned above.
[1]: https://stackoverflow.com/questions/45847690/how-to-connect-to-multiple-clusters-in-a-single-kafka-streams-application

spring cloud stream kafka: preventing a specific exception from retries and from being added to the dlq

I have a Spring Cloud Stream Kafka sample with this configuration:
spring:
  main:
    allow-circular-references: true
  cloud:
    stream:
      bindings:
        news-in-0:
          consumer:
            max-attempts: 5
            concurrency: 25
            back-off-multiplier: 4.0
            back-off-max-interval: 12000
            retryableExceptions:
              com.mycompany.kafka.DuplicateReverseException: false
              com.mycompany.kafka.ApplicationException: true
          content-type: application/json
          group: sample
          destination: newsQueue
        newsdlq-in-0:
          consumer:
            max-attempts: 2
            concurrency: 25
            back-off-initial-interval: 15000 #360000
          content-type: application/json
          group: sample
          destination: newsQueueDlq
      kafka:
        binder:
          auto-create-topics: true
          brokers: localhost:9092
          min-partition-count: 25
        bindings:
          news-in-0:
            consumer:
              ackEachRecord: true
              enableDlq: true
              dlqName: newsQueueDlq
              autoCommitOnError: true
      function:
        definition: news;newsdlq
  kafka:
    consumer:
      auto-offset-reset: earliest
    listener:
      poll-timeout: 2000
      idle-event-interval: 30000
  application:
    name: consumer-cloud-stream
I'm using spring-cloud.version -> 2021.0.3.
For our business case, we need to skip further attempts for business exceptions like DuplicateReverseException and not put those messages on the DLQ at all.
This configuration covers the first part of the requirement (no retries), but I can't prevent the message from being added to the DLQ.
Is it OK to simply catch this exception and ignore it, or does Spring provide a better way?
@Bean
public Consumer<Message<News>> news() {
    return message -> {
        if (message.getPayload().getSource().equals("1")) {
            log.error("1");
            throw new DuplicateReverseException();
        } else {
            log.error("2");
            throw new ApplicationException();
        }
    };
}
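For reference, the catch-and-ignore approach asked about above would look roughly like this (a sketch; process() is a hypothetical stand-in for the real business logic):
@Bean
public Consumer<Message<News>> news() {
    return message -> {
        try {
            process(message.getPayload());   // hypothetical business logic
        } catch (DuplicateReverseException e) {
            // Swallow the duplicate: the listener completes normally, so there is
            // no retry and nothing is published to newsQueueDlq.
            log.warn("Ignoring duplicate news message: {}", e.getMessage());
        }
        // Any other exception still propagates, is retried per max-attempts,
        // and then lands on the DLQ.
    };
}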

How to set the servers property of openapi with Apache Camel?

I am trying to set up an OpenAPI specification and publish the API with Apache Camel and Spring. I tried using restConfiguration, adding the property in the application.yaml, and using @OpenApiProperty on the app. Every time, the generated YAML reads:
- servers
- url: ""
platform:
  # not used
  urlrewrite:
    enabled: false
    token: ${LOCAL_TOKEN:}
  apim:
    token_ep: ${x/token}
    client:
      username: ${apim_client_username:}
      password: ${apim_client_password:}
      consumerKey: ${apim_client_id:}
      consumerSecret: ${apim_client_secret:}
  endpointsOrchestrated:
    server: myServer
    emr-endpoint: xxxxxxxx
  triggerName: ${PLATFORM_MODULE_ID}
  env: dev
  domain: ${PLATFORM_MODULE_DOMAIN}
openapi:
  title: My Sample api
  version: 1.0.0
camel:
  dataformat:
    jackson:
      auto-discover-object-mapper: true
  springboot:
    tracing: false
  rest:
    host: myhost.com
    port: 8080
spring:
  application:
    name: aaa
  profiles:
    active: ${ENV:local}
server:
  servlet:
    context-path: /
  port: ${TOMCAT_PORT:8080}
  host: localhost
  # MAX HTTP THREADS
  tomcat:
    threads:
      max: ${MAIN_HTTP_THREADS:200}

JHipster test: NoCacheRegionFactoryAvailableException when second level cache is disabled

When I generated an app with JHipster, I disabled the second-level cache. However, when I run either "gradle test" or "run as JUnit test", the tests fail with a NoCacheRegionFactoryAvailableException. I have checked the application.yml in "src/test/resources/config" and made sure that the second-level cache is disabled. I do not know why the app is still looking for a second-level cache. Is there any clue how this happens, or how to disable the second-level cache completely?
Apart from the test failure, everything else works well and the app runs successfully.
application.yml in src/test/resources/config
spring:
  application:
    name: EMS
  datasource:
    url: jdbc:h2:mem:EMS;DB_CLOSE_DELAY=-1
    name:
    username:
    password:
  jpa:
    database-platform: com.espion.ems.domain.util.FixedH2Dialect
    database: H2
    open-in-view: false
    show_sql: true
    hibernate:
      ddl-auto: none
      naming-strategy: org.springframework.boot.orm.jpa.hibernate.SpringNamingStrategy
    properties:
      hibernate.cache.use_second_level_cache: false
      hibernate.cache.use_query_cache: false
      hibernate.generate_statistics: true
      hibernate.hbm2ddl.auto: validate
  data:
    elasticsearch:
      cluster-name:
      cluster-nodes:
      properties:
        path:
          logs: target/elasticsearch/log
          data: target/elasticsearch/data
  mail:
    host: localhost
  mvc:
    favicon:
      enabled: false
  thymeleaf:
    mode: XHTML
liquibase:
  contexts: test
security:
  basic:
    enabled: false
server:
  port: 10344
  address: localhost
jhipster:
  async:
    corePoolSize: 2
    maxPoolSize: 50
    queueCapacity: 10000
  security:
    rememberMe:
      # security key (this key should be unique for your application, and kept secret)
      key: jhfasdhflasdhfasdkfhasdjkf
  metrics: # DropWizard Metrics configuration, used by MetricsConfiguration
    jmx.enabled: true
  swagger:
    title: EMS API
    description: EMS API documentation
    version: 0.0.1
    termsOfServiceUrl:
    contactName:
    contactUrl:
    contactEmail:
    license:
    licenseUrl:
    enabled: false
Move src/test/resources/config/application.yml to the src/test/resources directory.
You can find that solution at https://github.com/jhipster/generator-jhipster/issues/3730

spring eureka security: Batch update failure with HTTP status code 401

I have been studying Spring Cloud Eureka and Spring Cloud Config, and they work fine. But after adding security to the Eureka service, I ran into some errors.
All the code and error details are at https://github.com/keryhu/eureka-security
The eureka service application.yml
security:
  user:
    name: user
    password: password
eureka:
  client:
    registerWithEureka: false
    fetchRegistry: false
  server:
    wait-time-in-ms-when-sync-empty: 0
And the config-service application.java
@SpringBootApplication
@EnableConfigServer
@EnableDiscoveryClient
config-service application.yml
eureka:
  client:
    registry-fetch-interval-seconds: 5
    serviceUrl:
      defaultZone: http://user:password@${domain.name:localhost}:8761/eureka/
spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/spring-cloud-samples/config-repo
          basedir: target/config
These errors appear after starting the config-service:
2016-04-10 11:22:39.402 ERROR 80526 --- [get_localhost-3] c.n.e.cluster.ReplicationTaskProcessor : Batch update failure with HTTP status code 401; discarding 1 replication tasks
2016-04-10 11:22:39.402 WARN 80526 --- [get_localhost-3] c.n.eureka.util.batcher.TaskExecutors : Discarding 1 tasks of TaskBatchingWorker-target_localhost-3 due to permanent error
2016-04-10 11:23:09.411 ERROR 80526 --- [get_localhost-3] c.n.e.cluster.ReplicationTaskProcessor : Batch update failure with HTTP status code 401; discarding 1 replication tasks
2016-04-10 11:23:09.412 WARN 80526 --- [get_localhost-3] c.n.eureka.util.batcher.TaskExecutors : Discarding 1 tasks of TaskBatchingWorker-target_localhost-3 due to permanent error
2016-04-10 11:23:39.429 ERROR 80526 --- [get_localhost-3] c.n.e.cluster.ReplicationTaskProcessor : Batch update failure with HTTP status code 401; discarding 1 replication tasks
2016-04-10 11:23:39.430 WARN 80526 --- [get_localhost-3] c.n.eureka.util.batcher.TaskExecutors : Discarding 1 tasks of TaskBatchingWorker-target_localhost-3 due to permanent error
Set eureka.client.serviceUrl.defaultZone of the eureka-server to: http://username:password@localhost:8761/eureka/
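In the Eureka server's own application.yml, that would look roughly like this (a sketch; the credentials must match the security.user values configured above):
eureka:
  client:
    serviceUrl:
      defaultZone: http://user:password@localhost:8761/eureka/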
I agree with jacky-fan's answer.
This is how my working configuration looks without username and password.
server application.yml
spring:
  application:
    name: eureka-service
server:
  port: 8302
eureka:
  client:
    register-with-eureka: false
    fetch-registry: false
    service-url:
      defaultZone: http://localhost:8302/eureka/
  server:
    wait-time-in-ms-when-sync-empty: 0
client application.yml
eureka:
  client:
    register-with-eureka: true
    fetch-registry: true
    service-url:
      defaultZone: http://localhost:8302/eureka/
  instance:
    hostname: localhost
spring:
  application:
    name: my-service
server:
  port: 8301

Resources