How to configure the retryOnResultPredicate in resilience4j?

I want to set failAfterMaxAttempts to true so that a MaxRetriesExceededException is thrown once the maximum number of retries is exhausted. According to the docs, failAfterMaxAttempts requires a predicate to be configured for retryOnResultPredicate. Can someone help with an example configuration? The retryOnResultPredicate should evaluate the HTTP status of the response.
resilience4j.retry:
  configs:
    default:
      maxAttempts: 3
      waitDuration: 100
      failAfterMaxAttempts: true
      retryOnResultPredicate:
      retryExceptions:
        - org.springframework.web.client.HttpServerErrorException
        - java.util.concurrent.TimeoutException
        - java.io.IOException
      ignoreExceptions:
        - io.github.robwin.exception.BusinessException
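For illustration, a minimal sketch of such a predicate and the equivalent programmatic setup. The package, class name and the assumption that the retried call returns a Spring ResponseEntity are mine, not from the resilience4j docs. In the YAML above, the predicate would be referenced on the retryOnResultPredicate line by its fully qualified class name (the exact property key can differ between resilience4j-spring-boot versions, so check the configuration reference for your version); failAfterMaxAttempts is available from resilience4j 1.7.0.

package com.example.retry; // illustrative package

import java.time.Duration;
import java.util.function.Predicate;

import org.springframework.http.ResponseEntity;

import io.github.resilience4j.retry.Retry;
import io.github.resilience4j.retry.RetryConfig;

// Predicate referenced from the retry config by its fully qualified class name.
// It requests a retry whenever the call completed but returned a 5xx status.
public class HttpStatusRetryPredicate implements Predicate<ResponseEntity<?>> {

    @Override
    public boolean test(ResponseEntity<?> response) {
        return response.getStatusCode().is5xxServerError();
    }

    // Programmatic equivalent of the YAML above: retry on 5xx results and throw
    // MaxRetriesExceededException when the last allowed attempt still returns 5xx.
    public static Retry buildRetry() {
        RetryConfig config = RetryConfig.<ResponseEntity<String>>custom()
                .maxAttempts(3)
                .waitDuration(Duration.ofMillis(100))
                .retryOnResult(response -> response.getStatusCode().is5xxServerError())
                .failAfterMaxAttempts(true)
                .build();
        return Retry.of("backendA", config);
    }
}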

Related

Promtail: "error sending batch, will retry" status=500

When starting the Promtail client, it gives an error:
component=client host=loki:3100 msg="error sending batch, will retry" status=500 error="server returned HTTP status 500 Internal Server Error (500): rpc error: code = ResourceExhausted desc = grpc: received message larger than max (6780207 vs. 4194304)"
Promtail config:
server:
  http_listen_port: 9080
  grpc_listen_port: 0
positions:
  filename: /tmp/positions.yaml
clients:
  - url: 'http://loki:3100/loki/api/v1/push'
scrape_configs:
  - job_name: server-log
    pipeline_stages:
    static_configs:
      - targets:
          - localhost
        labels:
          job: server-log
          __path__: /opt/log/*.log
          __path_exclude__: /opt/log/jck_*,*.log
I tried changing the limits on the server and running Promtail with these parameters:
/usr/local/bin/promtail-linux-amd64 -config.file=/etc/config-promtail.yml -server.grpc-max-recv-msg-size-bytes 16777216 -server.grpc-max-concurrent-streams 0 -server.grpc-max-send-msg-size-bytes 16777216 -limit.readline-rate-drop -client.batch-size-bytes 2048576
But judging by what I found, this is a gRPC error, or rather a limit on the size of the transmitted message, where the default maximum is 4 MB.
Changing the parameters on the server (Loki) side, not on the client, solved the problem:
server:
  http_listen_port: 3100
  grpc_listen_port: 9096
  grpc_server_max_recv_msg_size: 8388608
  grpc_server_max_send_msg_size: 8388608
limits_config:
  ingestion_rate_mb: 15
  ingestion_burst_size_mb: 30
  per_stream_rate_limit: 10MB
  per_stream_rate_limit_burst: 20MB
  reject_old_samples: true
  reject_old_samples_max_age: 168h
  retention_period: 744h
  max_query_length: 0h

feign.RetryableException: Unexpected end of file from server executing POST

feign.RetryableException: Unexpected end of file from server executing POST http://conf-management-online/confLoader
at feign.FeignException.errorExecuting(FeignException.java:268)
at feign.SynchronousMethodHandler.executeAndDecode(SynchronousMethodHandler.java:129)
at feign.SynchronousMethodHandler.invoke(SynchronousMethodHandler.java:89)
at feign.ReflectiveFeign$FeignInvocationHandler.invoke(ReflectiveFeign.java:100)
at com.sun.proxy.$Proxy124.getConfs(Unknown Source)
Caused by: java.net.SocketException: Unexpected end of file from server
at java.base/sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:866)
at java.base/sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:689)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1615)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1520)
at java.base/java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:527)
at feign.Client$Default.convertResponse(Client.java:109)
at feign.Client$Default.execute(Client.java:105)
I already added prefer-ip-address and retry to my config:
spring.cloud.consul.discovery.prefer-ip-address: true
retry:
  enabled: true
  max-attempts: 20
  max-interval: 2000
  initial-interval: 1000
My application: Spring Boot + Spring Cloud + OpenFeign + Consul
spring-cloud-starter-openfeign: 3.1.3
spring-retry: 1.3.1
The problem is: how can I avoid this error and make sure that Feign retries using the ip:port?
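One thing to check: Spring Cloud OpenFeign wires Feign with Retryer.NEVER_RETRY by default, so if the intention is for Feign itself to retry the RetryableException above, a feign.Retryer bean has to be provided. A minimal sketch, with the class name and the period/attempt values chosen to roughly mirror the retry settings above:

import feign.Retryer;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class FeignRetryConfig {

    // Retry failed requests starting at 1 s between attempts,
    // backing off up to 2 s, for at most 20 attempts.
    @Bean
    public Retryer feignRetryer() {
        return new Retryer.Default(1000, 2000, 20);
    }
}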

Producer configuration of KafkaMirrorMaker2 (Strimzi)

I'm using Kafka MirrorMaker 2 to copy topic content from Kafka cluster A to Kafka cluster B.
I'm getting the following errors:
kafka.MirrorSourceConnector-0} flushing 197026 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask) [SourceTaskOffsetCommitter-1]
Where do I need to add the producer configuration?
I tried to put it in two places, but it doesn't seem to be working:
kafkaMirrorMaker2:
  spec:
    config:
      batch.size: 9000
      offset.flush.timeout.ms: 3000
      producer.buffer.memory: 5000
  instances:
    - enabled: true
      name: mirror-maker
      replicas: 3
      enableMetrics: true
      targetCluster: target-kafka
      producer:
        config:
          producer.batch.size: 300000
          batch.size: 9000
          offset.flush.timeout.ms: 3000
          producer.buffer.memory: 5000
      clusters:
        source-kafka:
          bootstrapServers: sourcekafka:9092
        target-kafka:
          bootstrapServers: targetkafka:9092
      mirrors:
        - sourceCluster: source-kafka
          targetCluster: target-kafka
          topicsPattern: "mytopic"
          groupsPattern: "my-replicator"
          offset.flush.timeout.ms: 3000
          producer.buffer.memory: 5000
          sourceConnector:
            config:
              max.poll.record: 2000
              tasks.max: 9
              consumer.auto.offset.reset: latest
              offset.flush.timeout.ms: 3000
              consumer.request.timeout.ms: 15000
              producer.batch.size: 65536
          heartbeatConnector:
            config:
              tasks.max: 9
              heartbeats.topic.replication.factor: 3
          checkpointConnector:
            config:
              tasks.max: 9
              checkpoints.topic.replication.factor: 3
      resources:
        requests:
          memory: 2Gi
          cpu: 3
        limits:
          memory: 4Gi
          cpu: 3
Any idea where to place the producer configuration?

fetch-min-size & max-poll-records Spring Kafka configurations do not work as expected

I am working on a Spring Boot application with Spring Kafka that listens to a single Kafka topic, segregates the records by category, creates a JSON file from each batch, and uploads it to AWS S3.
I am receiving huge data volumes on the Kafka topic, and I need to make sure the JSON files are chunked into appropriately large pieces to limit the number of JSON files uploaded to S3.
Below is my application.yml configuration for the Kafka consumer:
spring:
  kafka:
    consumer:
      group-id: newton
      auto-offset-reset: earliest
      fetch-max-wait:
        seconds: 1
      fetch-min-size: 500000000
      max-poll-records: 50000000
      value-deserializer: com.forwarding.application.consumer.model.deserializer.MeasureDeserializer
I have created a listener for reading the topic continuously.
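(For context, a minimal sketch of such a listener; the topic name is illustrative, and the record value is whatever object the MeasureDeserializer above produces.)

import org.apache.kafka.clients.consumer.ConsumerRecord;

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class MeasureListener {

    // Invoked once per record; the value has already been deserialized by the
    // value-deserializer configured in application.yml.
    @KafkaListener(topics = "measures")
    public void consume(ConsumerRecord<String, Object> record) {
        // segregate by category, build the JSON chunk, upload to S3 ...
    }
}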
Even with the above configuration, I am receiving records in console as follows:
2019-03-27T15:25:56.02+0530 [APP/PROC/WEB/0] OUT 2019-03-27 09:55:56.024 INFO 8 --- [ntainer#0-0-C-1] c.s.n.f.a.s.impl.ConsumerServiceImpl : Time taken(ms) 56. No Of measures: 60
2019-03-27T15:25:56.21+0530 [APP/PROC/WEB/2] OUT 2019-03-27 09:55:56.210 INFO 8 --- [ntainer#0-0-C-1] c.s.n.f.a.s.impl.ConsumerServiceImpl : Time taken(ms) 80. No Of measures: 96
2019-03-27T15:25:56.56+0530 [APP/PROC/WEB/0] OUT 2019-03-27 09:55:56.560 INFO 8 --- [ntainer#0-0-C-1] c.s.n.f.a.s.impl.ConsumerServiceImpl : Time taken(ms) 76. No Of measures: 39
2019-03-27T15:25:56.73+0530 [APP/PROC/WEB/2] OUT 2019-03-27 09:55:56.732 INFO 8 --- [ntainer#0-0-C-1] c.s.n.f.a.s.impl.ConsumerServiceImpl : Time taken(ms) 77. No Of measures: 66
Can anyone please let me know what can be configured to get the received records as per the configuration in application.yml?
I just copied your configuration (except the max wait - see the syntax I used) and it worked fine...
spring:
  kafka:
    consumer:
      group-id: newton
      auto-offset-reset: earliest
      fetch-max-wait: 1s
      fetch-min-size: 500000000
      max-poll-records: 50000000
2019-03-27 13:43:55.454 INFO 98982 --- [ main] o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = earliest
bootstrap.servers = [localhost:9092]
check.crcs = true
client.id =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 1000
fetch.min.bytes = 500000000
group.id = newton
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 50000000
...
You can set arbitrary properties that are not directly supported as Boot properties using the ...properties property.
e.g.
spring:
  kafka:
    consumer:
      properties:
        max.poll.interval.ms: 300000
or
spring:
  kafka:
    consumer:
      properties:
        max:
          poll:
            interval:
              ms: 300000
The documentation is here.
The properties supported by auto configuration are shown in Appendix A, Common application properties. Note that, for the most part, these properties (hyphenated or camelCase) map directly to the Apache Kafka dotted properties. Refer to the Apache Kafka documentation for details.
The first few of these properties apply to all components (producers, consumers, admins, and streams) but can be specified at the component level if you wish to use different values. Apache Kafka designates properties with an importance of HIGH, MEDIUM, or LOW. Spring Boot auto-configuration supports all HIGH importance properties, some selected MEDIUM and LOW properties, and any properties that do not have a default value.
Only a subset of the properties supported by Kafka are available directly through the KafkaProperties class. If you wish to configure the producer or consumer with additional properties that are not directly supported, use the following properties:
spring.kafka.properties.prop.one=first
spring.kafka.admin.properties.prop.two=second
spring.kafka.consumer.properties.prop.three=third
spring.kafka.producer.properties.prop.four=fourth
spring.kafka.streams.properties.prop.five=fifth

Storm HiveBolt missing records due to batching of Hive transactions

To store the processed records I am using HiveBolt in a Storm topology with the following arguments:
- id: "MyHiveOptions"
  className: "org.apache.storm.hive.common.HiveOptions"
  constructorArgs:
    - "${metastore.uri}"  # metaStoreURI
    - "${hive.database}"  # databaseName
    - "${hive.table}"     # tableName
  configMethods:
    - name: "withTxnsPerBatch"
      args:
        - 2
    - name: "withBatchSize"
      args:
        - 100
    - name: "withIdleTimeout"
      args:
        - 2      # default value 0
    - name: "withMaxOpenConnections"
      args:
        - 200    # default value 500
    - name: "withCallTimeout"
      args:
        - 30000  # default value 10000
    - name: "withHeartBeatInterval"
      args:
        - 240    # default value 240
There are missing records in Hive because a batch is not completed and its records are never flushed. (For example: 1330 records are processed but only 1200 records are in Hive, so 130 records are missing.)
How can I overcome this situation? How can I fill the batch so that the transaction is triggered and the records are stored in Hive?
Topology: Kafka-Spout --> DataProcessingBolt
DataProcessingBolt --> HiveBolt (Sink)
DataProcessingBolt --> JdbcBolt (Sink)
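For reference, a minimal Java sketch of the same HiveOptions setup (the mapper and its column names are illustrative; they are not in the Flux config above). withBatchSize controls how many tuples go into one write, and a non-zero withIdleTimeout lets the bolt flush a partially filled batch after that many idle seconds instead of holding the trailing records back:

import org.apache.storm.hive.bolt.HiveBolt;
import org.apache.storm.hive.bolt.mapper.DelimitedRecordHiveMapper;
import org.apache.storm.hive.common.HiveOptions;
import org.apache.storm.tuple.Fields;

public class HiveSinkFactory {

    public static HiveBolt buildHiveBolt(String metaStoreUri, String database, String table) {
        // Maps tuple fields to Hive columns; column names are illustrative.
        DelimitedRecordHiveMapper mapper = new DelimitedRecordHiveMapper()
                .withColumnFields(new Fields("id", "payload", "event_time"));

        HiveOptions options = new HiveOptions(metaStoreUri, database, table, mapper)
                .withTxnsPerBatch(2)
                .withBatchSize(100)
                // Flush a partially filled batch after 2 idle seconds
                // so trailing records are not left unwritten.
                .withIdleTimeout(2)
                .withCallTimeout(30000)
                .withHeartBeatInterval(240);

        return new HiveBolt(options);
    }
}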
