Spring Cloud Stream Kafka: preventing a specific exception from being retried and added to the DLQ

I have a Spring Cloud Stream Kafka sample with this configuration:
spring:
  main:
    allow-circular-references: true
  cloud:
    stream:
      bindings:
        news-in-0:
          consumer:
            max-attempts: 5
            concurrency: 25
            back-off-multiplier: 4.0
            back-off-max-interval: 12000
            retryableExceptions:
              com.mycompany.kafka.DuplicateReverseException: false
              com.mycompany.kafka.ApplicationException: true
          content-type: application/json
          group: sample
          destination: newsQueue
        newsdlq-in-0:
          consumer:
            max-attempts: 2
            concurrency: 25
            back-off-initial-interval: 15000 #360000
          content-type: application/json
          group: sample
          destination: newsQueueDlq
      kafka:
        binder:
          auto-create-topics: true
          brokers: localhost:9092
          min-partition-count: 25
        bindings:
          news-in-0:
            consumer:
              ackEachRecord: true
              enableDlq: true
              dlqName: newsQueueDlq
              autoCommitOnError: true
      function:
        definition: news;newsdlq
  kafka:
    consumer:
      auto-offset-reset: earliest
    listener:
      poll-timeout: 2000
      idle-event-interval: 30000
  application:
    name: consumer-cloud-stream
I'm using spring-cloud.version -> 2021.0.3.
For our business we need to ignore business exceptions like DuplicateReverseException: they should not be retried and should not end up in the DLQ.
This configuration covers the first requirement (no retries), but I can't prevent the message from being added to the DLQ.
Is it OK to just catch this exception and ignore it? Does Spring provide a better way?
@Bean
public Consumer<Message<News>> news() {
    return message -> {
        if (message.getPayload().getSource().equals("1")) {
            log.error("1");
            throw new DuplicateReverseException();
        } else {
            log.error("2");
            throw new ApplicationException();
        }
    };
}

Related

Kafka starts throwing replication factor error as soon as spring boot app connects to it

As soon as the Spring Boot application connects, Kafka starts flooding the logs with this error:
INFO [Admin Manager on Broker 2]: Error processing create topic request CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=3, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')]) (kafka.server.ZkAdminManager)
org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 3 larger than available brokers: 1.
Here's the Kafka config I pass via the application YAML:
spring:
  cloud:
    stream:
      instanceIndex: 0
      kafka:
        default:
          producer:
            topic:
              replication-factor: 1
            configuration:
              key:
                serializer: org.apache.kafka.common.serialization.StringSerializer
              spring:
                json:
                  add:
                    type:
                      headers: 'false'
              value:
                serializer: org.springframework.kafka.support.serializer.JsonSerializer
              max:
                block:
                  ms: '100'
          consumer:
            topic:
              replication-factor: 1
            configuration:
              key:
                deserializer: org.apache.kafka.common.serialization.StringDeserializer
              spring:
                json:
                  trusted:
                    packages: com.message
              value:
                deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
        binder:
          autoCreateTopics: 'false'
          brokers: localhost:19092
          replicationFactor: 1
      bindings:
        consume-in-0:
          group: ${category}-${spring.application.name}-consumer-${runtime-env}
          destination: alm-tom-${runtime-env}
        publish-out-0:
          destination: ${category}-${spring.application.name}-${runtime-env}
I don't see any other config to control the replication factor of the consumer-offsets topic.
The error is coming from the broker itself; it only starts logging this when you create your first consumer group...
Check the server.properties file for offsets.topic.replication.factor.
Similarly, the transactions topic has its own replication factor that needs to be reduced (assuming you're using a newer Kafka version).
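For a single-broker development cluster, the relevant broker settings in server.properties would look something like this (the transaction entries assume a Kafka version that has the transactions topic):

# server.properties on a single-broker dev cluster
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1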

Is it possible to create a multi-binder binding with Spring Cloud Stream Kafka Streams to consume from a cluster A and produce to a cluster B?

I want to create a Kafka Streams application with Spring Cloud Stream which integrates two different Kafka clusters/setups. I tried to implement it using a multi-binder configuration as mentioned in the documentation, similar to the examples here:
https://github.com/spring-cloud/spring-cloud-stream-samples/tree/main/multi-binder-samples
Given a simple function like this:
@Bean
public Function<KStream<String, AnalyticsEvent>, KStream<String, UpdateEvent>> analyticsEventProcessor() {
    return input -> input
        .filter(new AnalyticsPredicate())
        .map(new AnalyticsToUpdateEventMapper());
}
In the configuration I'm trying to bind these to two different binders.
spring.cloud:
  stream:
    bindings:
      analyticsEventProcessor-in-0:
        destination: analytics-events
        binder: cluster1-kstream
      analyticsEventProcessor-out-0:
        destination: update-events
        binder: cluster2-kstream
    binders:
      cluster1-kstream:
        type: kstream
        environment:
          spring:
            cloud:
              stream:
                kafka:
                  binder:
                    brokers: <url cluster1>:9093
                    configuration:
                      security.protocol: SSL
                      schema.registry.url: <schema-registry-url-cluster1>
                      schema.registry.ssl.truststore.location: /mnt/secrets/cluster1/truststore.jks
                      schema.registry.ssl.truststore.password: ${SPRING_KAFKA_SSL_CLUSTER1_TRUST-STORE-PASSWORD}
                      schema.registry.ssl.keystore.location: /mnt/secrets/cluster1/keystore.jks
                      schema.registry.ssl.keystore.password: ${SPRING_KAFKA_SSL_CLUSTER1_KEY-STORE-PASSWORD}
                      ssl.truststore.location: /mnt/secrets/cluster1/truststore.jks
                      ssl.truststore.password: ${SPRING_KAFKA_SSL_CLUSTER1_TRUST-STORE-PASSWORD}
                      ssl.truststore.type: JKS
                      ssl.keystore.location: /mnt/secrets/cluster1/keystore.jks
                      ssl.keystore.password: ${SPRING_KAFKA_SSL_CLUSTER1_KEY-STORE-PASSWORD}
                      ssl.keystore.type: JKS
                      ssl.enabled.protocols: TLSv1.2
                  streams:
                    binder:
                      brokers: <url cluster1>:9093
                      configuration:
                        security.protocol: SSL
                        schema.registry.url: <schema-registry-url-cluster1>
                        schema.registry.ssl.truststore.location: /mnt/secrets/cluster1/truststore.jks
                        schema.registry.ssl.truststore.password: ${SPRING_KAFKA_SSL_CLUSTER1_TRUST-STORE-PASSWORD}
                        schema.registry.ssl.keystore.location: /mnt/secrets/cluster1/keystore.jks
                        schema.registry.ssl.keystore.password: ${SPRING_KAFKA_SSL_CLUSTER1_KEY-STORE-PASSWORD}
                        ssl.truststore.location: /mnt/secrets/cluster1/truststore.jks
                        ssl.truststore.password: ${SPRING_KAFKA_SSL_CLUSTER1_TRUST-STORE-PASSWORD}
                        ssl.truststore.type: JKS
                        ssl.keystore.location: /mnt/secrets/cluster1/keystore.jks
                        ssl.keystore.password: ${SPRING_KAFKA_SSL_CLUSTER1_KEY-STORE-PASSWORD}
                        ssl.keystore.type: JKS
                        ssl.enabled.protocols: TLSv1.2
      cluster2-kstream:
        type: kstream
        environment:
          spring:
            cloud:
              stream:
                kafka:
                  binder:
                    brokers: <url cluster2>:9093
                    configuration:
                      security.protocol: SSL
                      schema.registry.url: <schema-registry-url-cluster2>
                      schema.registry.ssl.truststore.location: /mnt/secrets/cluster2/truststore.jks
                      schema.registry.ssl.truststore.password: ${SPRING_KAFKA_SSL_CLUSTER2_TRUST-STORE-PASSWORD}
                      schema.registry.ssl.keystore.location: /mnt/secrets/cluster2/keystore.jks
                      schema.registry.ssl.keystore.password: ${SPRING_KAFKA_SSL_CLUSTER2_KEY-STORE-PASSWORD}
                      ssl.truststore.location: /mnt/secrets/cluster2/truststore.jks
                      ssl.truststore.password: ${SPRING_KAFKA_SSL_CLUSTER2_TRUST-STORE-PASSWORD}
                      ssl.truststore.type: JKS
                      ssl.keystore.location: /mnt/secrets/cluster2/keystore.jks
                      ssl.keystore.password: ${SPRING_KAFKA_SSL_CLUSTER2_KEY-STORE-PASSWORD}
                      ssl.keystore.type: JKS
                      ssl.enabled.protocols: TLSv1.2
                  streams:
                    binder:
                      brokers: <url cluster2>:9093
                      configuration:
                        security.protocol: SSL
                        schema.registry.url: <schema-registry-url-cluster2>
                        schema.registry.ssl.truststore.location: /mnt/secrets/cluster2/truststore.jks
                        schema.registry.ssl.truststore.password: ${SPRING_KAFKA_SSL_CLUSTER2_TRUST-STORE-PASSWORD}
                        schema.registry.ssl.keystore.location: /mnt/secrets/cluster2/keystore.jks
                        schema.registry.ssl.keystore.password: ${SPRING_KAFKA_SSL_CLUSTER2_KEY-STORE-PASSWORD}
                        ssl.truststore.location: /mnt/secrets/cluster2/truststore.jks
                        ssl.truststore.password: ${SPRING_KAFKA_SSL_CLUSTER2_TRUST-STORE-PASSWORD}
                        ssl.truststore.type: JKS
                        ssl.keystore.location: /mnt/secrets/cluster2/keystore.jks
                        ssl.keystore.password: ${SPRING_KAFKA_SSL_CLUSTER2_KEY-STORE-PASSWORD}
                        ssl.keystore.type: JKS
                        ssl.enabled.protocols: TLSv1.2
I first ran the application entirely against a single cluster, which worked well. When I run the multi-binder version I always get this error:
2022-08-10 15:28:42.892 WARN 1 --- [-StreamThread-2] org.apache.kafka.clients.NetworkClient : [Consumer clientId=<clientid>-StreamThread-2-consumer, groupId=<group-id>] Error while fetching metadata with correlation id 2 : {analytics-events=TOPIC_AUTHORIZATION_FAILED}
2022-08-10 15:28:42.893 ERROR 1 --- [-StreamThread-2] org.apache.kafka.clients.Metadata : [Consumer clientId=<client-id>, groupId=<group-id>] Topic authorization failed for topics [analytics-events]
2022-08-10 15:28:42.893 INFO 1 --- [-StreamThread-2] org.apache.kafka.clients.Metadata : [Consumer clientId=<client-id>, groupId=<group-id>] Cluster ID: <cluster-id>
2022-08-10 15:28:42.893 ERROR 1 --- [-StreamThread-2] c.s.a.a.e.UncaughtExceptionHandler : org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [analytics-events]
2022-08-10 15:28:42.893 ERROR 1 --- [-StreamThread-2] org.apache.kafka.streams.KafkaStreams : stream-client [<client-id>] Replacing thread in the streams uncaught exception handler
org.apache.kafka.streams.errors.StreamsException: org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [analytics-events]
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:642) ~[kafka-streams-3.1.1.jar!/:na]
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:576) ~[kafka-streams-3.1.1.jar!/:na]
Caused by: org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [analytics-events]
I verified the Kafka client certificates and they should be correct; I inspected them with keytool, and the password environment variables are set correctly. The consumer config also uses the correct broker URL.
Is it possible to use different Kafka clusters within a single KStream function via multi-binder, one for the stream's input and one for its output, or does this only work with binders of type kafka?
In Kafka Streams, you cannot connect to two different clusters in a single application. This means that you cannot receive from one cluster on the inbound and write to another cluster on the outbound when using a Spring Cloud Stream function. See this SO [thread][1] for more details.
As a workaround, you can probably receive from and write to the same cluster in your Kafka Streams function, and then bridge the output topic to the second cluster using a regular Kafka binder-based function. Regular (non-Kafka-Streams) functions can consume from and publish to multiple clusters.
@Bean
public Function<KStream<String, AnalyticsEvent>, KStream<String, UpdateEvent>> analyticsEventProcessor() {
    return input -> input
        .filter(new AnalyticsPredicate())
        .map(new AnalyticsToUpdateEventMapper());
}
This function needs to receive and write to the same cluster. Then you can have another function as below.
@Bean
public Function<?, ?> bridgeFunction() {
    ....
}
For this function, input is cluster-1 and output is cluster-2.
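A minimal sketch of such a bridge (the pass-through body and the UpdateEvent payload type are assumptions; the clusters are selected purely by the binder assigned to the bridgeFunction-in-0 and bridgeFunction-out-0 bindings):

@Bean
public Function<UpdateEvent, UpdateEvent> bridgeFunction() {
    // Pass the event through unchanged; the input binding reads the
    // update-events topic on cluster 1 and the output binding publishes
    // to cluster 2, both chosen via spring.cloud.stream.bindings.*.binder.
    return event -> event;
}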
When using this workaround, make sure to also include the regular Kafka binder as a dependency: spring-cloud-stream-binder-kafka.
Keep in mind that this approach has disadvantages, such as the overhead of an extra topic and the latency it adds. However, it is a potential workaround for this use case. For more options, see the SO thread mentioned above.
[1]: https://stackoverflow.com/questions/45847690/how-to-connect-to-multiple-clusters-in-a-single-kafka-streams-application

org.axonframework.commandhandling.distributed.CommandDispatchException: An error occurred while trying to dispatch a command on the DistributedCommandBus: 404 null

Facing the issue below:
org.axonframework.commandhandling.distributed.CommandDispatchException: An error occurred while trying to dispatch a command on the DistributedCommandBus: 404 null
The application is deployed on the OpenShift platform using K8s.
The issue appears when the number of pods is increased beyond 1 in a specific environment.
Below is the configuration file:
eventbus:
  type: jms
eventBus:
  server:
    selector:
      topicName: EventBus
      queueName: EventBus.management-inventory-api
eventstore:
  jdbc:
    validateOnly: true
endpoints:
  health:
    sensitive: false
jmx:
  uniqueNames: true
liquibase:
  enabled: false
server:
  port: 8087
  http:
    port: 8088
  ssl:
    enabled: false
    keyStore: ${certs.path}/inventorydomain.jks
    keyStoreType: JKS
    trustStore: ${certs.path}/inventorydomain_truststore.jks
    trustStoreType: JKS
    keyAlias: inventorydomain
spring:
  jmx:
    default-domain: com.inventory.domain.inventory-command
  jpa:
    hibernate:
      ddlAuto: validate
    show-sql: true
  messages:
    basename: errors,platform-errors
  autoconfigure.exclude: |
    org.axonframework.boot.autoconfig.JpaAutoConfiguration,
    org.axonframework.boot.autoconfig.AxonAutoConfiguration
annotation:
  eventHandler:
    lookupPrefix: com.consumercard.command.listener.external
  eventMapping:
    lookupPrefix: com.consumercard.command.event
jmsBrokers:
  producer:
    brokerUrl: vm://localhost?broker.persistent=false
  consumer:
    brokerUrl: vm://localhost?broker.persistent=false
spring.cloud.config.discovery.enabled: false
eureka:
  instance:
    preferIpAddress: true
    nonSecurePort: ${server.http.port}
    securePort: ${server.port}
    nonSecurePortEnabled: true
    securePortEnabled: false
    metadata-map:
      zone: zone-1
  client:
    enabled: false
    serviceUrl:
      defaultZone: https://localhost:8448/eureka
logging:
  level: DEBUG
basicAuth.enabled: true
rest:
  client:
    connection:
      defaultMaxPerRoute: 50
      maxTotal: 100
      connectionTimeout: 10000
      readTimeout: 30000
NOTE: this additional property was added:
eureka:
  instance:
    metadata-map:
      zone: zone-1

How to set the servers property of OpenAPI with Apache Camel?

I am trying to set up an OpenAPI specification and publish the API with Apache Camel and Spring. I tried using restConfiguration, adding the property in the application.yaml, and using @OpenApiProperty on the app. Every time, the generated YAML reads:
servers:
- url: ""
My application.yaml:
platform:
  # not used
  urlrewrite:
    enabled: false
    token: ${LOCAL_TOKEN:}
  apim:
    token_ep: ${x/token}
    client:
      username: ${apim_client_username:}
      password: ${apim_client_password:}
      consumerKey: ${apim_client_id:}
      consumerSecret: ${apim_client_secret:}
  endpointsOrchestrated:
    server: myServer
    emr-endpoint: xxxxxxxx
  triggerName: ${PLATFORM_MODULE_ID}
  env: dev
  domain: ${PLATFORM_MODULE_DOMAIN}
openapi:
  title: My Sample api
  version: 1.0.0
camel:
  dataformat:
    jackson:
      auto-discover-object-mapper: true
  springboot:
    tracing: false
  rest:
    host: myhost.com
    port: 8080
spring:
  application:
    name: aaa
  profiles:
    active: ${ENV:local}
server:
  servlet:
    context-path: /
  port: ${TOMCAT_PORT:8080}
  host: localhost
  # MAX HTTP THREADS
  tomcat:
    threads:
      max: ${MAIN_HTTP_THREADS:200}
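For reference, the restConfiguration approach mentioned above would look roughly like this in a RouteBuilder (a sketch; the host and port values are taken from the YAML above, and the apiContextPath value is an assumption about where the spec is served):

import org.apache.camel.builder.RouteBuilder;

public class ApiRouteConfiguration extends RouteBuilder {
    @Override
    public void configure() {
        // Configure the REST DSL and the metadata used for the generated OpenAPI document.
        restConfiguration()
            .host("myhost.com")
            .port(8080)
            .apiContextPath("/api-doc")
            .apiProperty("api.title", "My Sample api")
            .apiProperty("api.version", "1.0.0");
    }
}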

Why is there a delay in routing when using Zuul + Ribbon with Eureka turned off?

When Zuul routes to a microservice it takes around 5 ms, but every 30th second it adds a delay of 100 ms or more.
I added ribbon.eureka.ServerListRefreshInterval=60000. Now I see it refreshing its list of servers every minute, but the delay still happens at the 30th second.
Can someone please tell me what's happening at the 30th second?
My Zuul configuration:
spring:
  application:
    name: xxxx
  profiles:
    active: default
  cloud:
    config:
      failFast: true
  security:
    enabled: false
  main:
    banner-mode: 'off'
eureka:
  client:
    enabled: false
    registerWithEureka: false
    fetchRegistry: false
zuul:
  host:
    connect-timeout-millis: 60000 # starting the connection
    socket-timeout-millis: 60000 # monitor the continuous incoming data flow
  sensitiveHeaders: Cookie,Set-Cookie
  ignoredServices: '*'
  routes:
    auth:
      path: /xxx/xx/**
      stripPrefix: false
      #url: http://localhost:9003/
    xx:
      path: /xx/xx/**
      stripPrefix: false
      #url: http://localhost:9002/
hystrix:
  command:
    default:
      execution:
        timeout:
          enabled: false
ribbon:
  ReadTimeout: 60000
  ConnectTimeout: 120000
  eureka:
    enabled: false
security:
  ignored: /**
  basic:
    enabled: false
management:
  security:
    enabled: false
xx:
  ribbon:
    eureka:
      enabled: false
    ServerListRefreshInterval: 60000
