I am using Spring with Kafka Streams and am trying to simulate an error scenario. As the code shows, if I pass 0 as the input value to one of the input topics, I get a divide-by-zero exception.
I have wrapped the error-causing section in a try-catch block, assuming that the error will be caught by the catch block.
@Component
@Slf4j
public class Topology {

    private static final Serde<String> STRING_SERDE = Serdes.String();

    @Autowired
    void KStreamsTopology(StreamsBuilder streamsBuilder) {
        KStream<String, String> messageStream = streamsBuilder
                .stream(List.of("quickstart-events", "sample-input"),
                        Consumed.with(STRING_SERDE, STRING_SERDE).withName("my-inputs"));
        try {
            messageStream
                    .peek((k, v) -> System.out.println("Input Key: " + k + ", value: " + v))
                    .mapValues(a -> getAnInt(Integer.parseInt(a)))
                    .foreach((k, v) -> System.out.println("output Key: " + k + ", Output value: " + v));
        } catch (Exception exception) {
            log.error("Exceptions occurred in my Topology:: " + exception.getMessage());
        }
    }

    private int getAnInt(Integer value) {
        return 10 / value;
    }
}
But the error is not caught. Instead, the exception below is thrown and the application stops.
2022-08-10 23:26:27.218 INFO 40343 --- [-StreamThread-1] o.a.k.s.p.internals.StreamThread : stream-thread [kafka-streams-poc-de132014-fb30-4c10-b5d5-7b190c3e38db-StreamThread-1] Shutdown complete
Exception in thread "kafka-streams-poc-de132014-fb30-4c10-b5d5-7b190c3e38db-StreamThread-1" org.apache.kafka.streams.errors.StreamsException: Exception caught in process. taskId=0_0, processor=my-inputs, topic=sample-input, partition=0, offset=10, stacktrace=java.lang.ArithmeticException: / by zero
at com.techopact.kafkastreamspoc.topology.Topology.getAnInt(Topology.java:37)
at com.techopact.kafkastreamspoc.topology.Topology.lambda$KStreamsTopology$1(Topology.java:28)
at org.apache.kafka.streams.kstream.internals.AbstractStream.lambda$withKey$2(AbstractStream.java:111)
at org.apache.kafka.streams.kstream.internals.KStreamMapValues$KStreamMapProcessor.process(KStreamMapValues.java:41)
at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:146)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forwardInternal(ProcessorContextImpl.java:253)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:232)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:191)
at org.apache.kafka.streams.kstream.internals.KStreamPeek$KStreamPeekProcessor.process(KStreamPeek.java:42)
at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:146)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forwardInternal(ProcessorContextImpl.java:253)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:232)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:191)
at org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:84)
at org.apache.kafka.streams.processor.internals.StreamTask.lambda$process$1(StreamTask.java:731)
at org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl.maybeMeasureLatency(StreamsMetricsImpl.java:809)
at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:731)
at org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:1296)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:784)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:604)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:576)
at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:758)
at org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:1296)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:784)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:604)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:576)
Caused by: java.lang.ArithmeticException: / by zero
at com.techopact.kafkastreamspoc.topology.Topology.getAnInt(Topology.java:37)
at com.techopact.kafkastreamspoc.topology.Topology.lambda$KStreamsTopology$1(Topology.java:28)
at org.apache.kafka.streams.kstream.internals.AbstractStream.lambda$withKey$2(AbstractStream.java:111)
at org.apache.kafka.streams.kstream.internals.KStreamMapValues$KStreamMapProcessor.process(KStreamMapValues.java:41)
at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:146)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forwardInternal(ProcessorContextImpl.java:253)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:232)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:191)
at org.apache.kafka.streams.kstream.internals.KStreamPeek$KStreamPeekProcessor.process(KStreamPeek.java:42)
at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:146)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forwardInternal(ProcessorContextImpl.java:253)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:232)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:191)
at org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:84)
at org.apache.kafka.streams.processor.internals.StreamTask.lambda$process$1(StreamTask.java:731)
at org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl.maybeMeasureLatency(StreamsMetricsImpl.java:809)
at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:731)
... 4 more
2022-08-10 23:31:26.921 INFO 40343 --- [90c3e38db-admin] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=kafka-streams-poc-de132014-fb30-4c10-b5d5-7b190c3e38db-admin] Node -1 disconnected.
2022-08-10 23:36:27.007 INFO 40343 --- [90c3e38db-admin] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=kafka-streams-poc-de132014-fb30-4c10-b5d5-7b190c3e38db-admin] Node 0 disconnected.
I am aware that this is not the right way to handle exceptions in a Kafka Streams application. I see a nice detailed article here. But I am still curious as to why the exception is not caught by the catch block.
I added some print statements to the code.
@Component
@Slf4j
public class Topology {

    private static final Serde<String> STRING_SERDE = Serdes.String();

    @Autowired
    void KStreamsTopology(StreamsBuilder streamsBuilder) {
        KStream<String, String> messageStream = streamsBuilder
                .stream(List.of("sample-input"),
                        Consumed.with(STRING_SERDE, STRING_SERDE).withName("my-inputs"));
        try {
            System.out.println("Start.... " + Thread.currentThread().getName());
            messageStream
                    .peek((k, v) -> System.out.println("Input Key: " + k + ", value: " + v))
                    .mapValues(a -> getAnInt(Integer.parseInt(a)))
                    .foreach((k, v) -> System.out.println("output Key: " + k + ", Output value: " + v));
        } catch (Exception exception) {
            log.error("Exceptions occurred in my Topology:: " + exception.getMessage());
        }
    }

    private int getAnInt(Integer value) {
        System.out.println("Inside getInt:: " + Thread.currentThread().getName());
        return 10 / value;
    }
}
I saw these lines in the output.
Start.... main
Inside getInt:: kafka-streams-poc-45acf83e-411f-4a21-833e-c7d9ccf1fe90-StreamThread-1
As Artem Bilan pointed out, the Kafka Streams processor nodes run on a different thread than the main thread. The KStreamsTopology method only builds the topology; it runs once on the main thread at startup, so the try-catch has long since completed by the time any record arrives. The lambda passed to mapValues executes later on the stream thread, where no catch block is in scope.
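For example, here is a minimal sketch that moves the try-catch inside the operator, so it runs on the stream thread where the exception actually occurs (the -1 fallback is an arbitrary illustration, not a recommendation):

messageStream
    .peek((k, v) -> System.out.println("Input Key: " + k + ", value: " + v))
    .mapValues(a -> {
        try {
            return getAnInt(Integer.parseInt(a));
        } catch (ArithmeticException | NumberFormatException e) {
            // caught on the stream thread, so the application keeps running
            log.error("Failed to process value '" + a + "': " + e.getMessage());
            return -1; // hypothetical fallback value
        }
    })
    .foreach((k, v) -> System.out.println("output Key: " + k + ", Output value: " + v));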
I have a Spring Boot Kafka consumer and producer. The consumer is expected to read records from topic 1 one by one, process them (time-consuming), write the result to another topic, and then manually commit the offset.
In order to avoid rebalancing, I have tried to call pause() and resume() on the listener container, but the consumer is always running and never responds to the pause() call; I tried it even with a while loop and had no success (unable to pause the consumer). KafkaListenerEndpointRegistry is autowired.
Spring Boot version = 2.6.9, spring-kafka version = 2.8.7
@KafkaListener(id = "c1", topics = "${app.topics.topic1}", containerFactory = "listenerContainerFactory1")
public void poll(ConsumerRecord<String, String> record, Acknowledgment ack) {
    log.info("Received Message by consumer of topic1: " + record.value());
    String result = process(record.value());
    producer.sendMessage(result + " topic2");
    log.info("Message sent from " + topicIn + " to " + topicOut);
    ack.acknowledge();
    log.info("Offset committed by consumer 1");
}
private String process(String value) {
    try {
        pauseConsumer();
        // Perform time intensive network IO operations
        resumeConsumer();
    } catch (InterruptedException e) {
        log.error(e.getMessage());
    }
    return value;
}

private void pauseConsumer() throws InterruptedException {
    if (registry.getListenerContainer("c1").isRunning()) {
        log.info("Attempting to pause consumer");
        Objects.requireNonNull(registry.getListenerContainer("c1")).pause();
        Thread.sleep(5000);
        log.info("kafkalistener container state - " + registry.getListenerContainer("c1").isRunning());
    }
}

private void resumeConsumer() throws InterruptedException {
    if (registry.getListenerContainer("c1").isContainerPaused() || registry.getListenerContainer("c1").isPauseRequested()) {
        log.info("Attempting to resume consumer");
        Objects.requireNonNull(registry.getListenerContainer("c1")).resume();
        Thread.sleep(5000);
        log.info("kafkalistener container state - " + registry.getListenerContainer("c1").isRunning());
    }
}
Am I missing something? Could someone please point me to the right way of achieving the required behaviour?
You are running the process() method on the listener thread, so pause/resume will not have any effect; the pause only takes effect when the listener thread exits the listener method (and after it has processed all the records received by the previous poll).
The next version (2.9), due later this month, has a new property, pauseImmediate, which causes the pause to take effect after the current record is processed.
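A hedged sketch of enabling that property once 2.9 is available, assuming a standard container factory configuration:

@Bean
ConcurrentKafkaListenerContainerFactory<String, String> listenerContainerFactory1(
        ConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // new in spring-kafka 2.9: pause() takes effect after the current
    // record instead of after the remainder of the poll batch
    factory.getContainerProperties().setPauseImmediate(true);
    return factory;
}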
You can try it like this; it works for me:
public class kafkaConsumer {

    public void run(String topicName) {
        try {
            // 'config' is assumed to hold the usual consumer properties
            Consumer<String, String> consumer = new KafkaConsumer<>(config);
            consumer.subscribe(Collections.singleton(topicName));
            while (true) {
                try {
                    ConsumerRecords<String, String> consumerRecords = consumer.poll(Duration.ofMillis(80000));
                    for (TopicPartition partition : consumerRecords.partitions()) {
                        List<ConsumerRecord<String, String>> partitionRecords = consumerRecords.records(partition);
                        for (ConsumerRecord<String, String> record : partitionRecords) {
                            String kafkaEvent = record.value();
                            consumer.pause(consumer.assignment());
                            /** Implement your business logic here **/
                            // once your processing is done:
                            consumer.resume(consumer.assignment());
                            try {
                                consumer.commitSync();
                            } catch (CommitFailedException e) {
                            }
                        }
                    }
                } catch (Exception e) {
                    continue;
                }
            }
        } catch (Exception e) {
        }
    }
}
When trying to subscribe to a non-existing topic with @KafkaListener, it logs a warning:
2021-04-22 13:03:56.710 WARN 20188 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-gg-2, groupId=gg] Error while fetching metadata with correlation id 174 : {not_exist=UNKNOWN_TOPIC_OR_PARTITION}
How can I detect and handle this? I tried an errorHandler, but it doesn't get called:
@KafkaListener(topics = "not_exist", groupId = "gg", errorHandler = "onError")
public void receive(String m) {
    log.info("Rcd: " + m);
}

...

@Bean
public KafkaListenerErrorHandler onError() {
    return new KafkaListenerErrorHandler() {
        @Override
        public Object handleError(Message<?> message, ListenerExecutionFailedException e) {
            log.error("handleError Error: {} message: {}", e.toString(), message);
            return message;
        }
    };
}
I think you can find the answer in org/springframework/kafka/listener/KafkaListenerErrorHandler.java:
* @return the return value is ignored unless the annotated method has a {@code @SendTo} annotation.
I want to know which records were skipped during reading and insert that into the "EXIT_MESSAGE" column of BATCH_STEP_EXECUTION. So, this is my "SkipListener" class:
public class IntroductionSkipListener {

    private static final Logger LOG = Logger.getLogger(IntroductionSkipListener.class);

    @OnSkipInRead
    public void onSkipInRead(Throwable t) {
        LOG.error("Item was skipped in read due to: " + t.getMessage());
    }

    @OnSkipInWrite
    public void onSkipInWrite(Introduction item, Throwable t) {
        LOG.error("Item " + item + " was skipped in write due to: " + t.getMessage());
    }

    @OnSkipInProcess
    public void onSkipInProcess(Introduction item, Throwable t) {
        LOG.error("Item " + item + " was skipped in process due to: " + t.getMessage());
    }
}
I want the following throwable message to be saved in the table.
2019-01-30 15:37:53.339 ERROR 10732 --- [nio-8080-exec-1] c.s.a.b.config.IntroductionSkipListener : Item was skipped in read due to: Parsing error at line: 2 in resource=[URL [file:E:/survey-data-repo/Web/Source_Code/survey_v0.12/survey-data/1546580364630/1.Introduction.csv]], input=[fa1e9a60603145e3a1ec67d513c594cb,as,1,4,4,New Salmamouth,Chauncey Skyway,53566,27.216799:75.598685,Aglae Rice,580242662,2,12/2/2001 10:01]
And I want to set the EXIT_STATUS to "SKIPPED" or something like that. Is that possible?
P.S. I am new to Spring Batch.
Yes, you can cast the Throwable parameter of your onSkipInRead method to FlatFileParseException and use that exception to get the raw line that was skipped and its number.
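For illustration, a minimal sketch of that cast; FlatFileParseException exposes both the raw input line and its number:

@OnSkipInRead
public void onSkipInRead(Throwable t) {
    if (t instanceof FlatFileParseException) {
        FlatFileParseException ffpe = (FlatFileParseException) t;
        // getLineNumber() and getInput() carry the skipped line's details
        LOG.error("Skipped line " + ffpe.getLineNumber() + " due to input: " + ffpe.getInput());
    }
}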
Now in order to change the ExitStatus to SKIPPED, you need to add a callback method after the step (@AfterStep) and set the exit status if some lines were skipped, something like:
@AfterStep
public ExitStatus checkForSkips(StepExecution stepExecution) {
    if (stepExecution.getSkipCount() > 0) {
        return new ExitStatus("SKIPPED");
    }
    else {
        return null;
    }
}
You can find an example in the SkipCheckingListener.
I'm using a WebClient object to send HTTP POST requests to a server.
It sends a huge number of requests quite rapidly (there are about 4000 messages in a QueueChannel). The problem is that the server can't respond fast enough, so I'm getting a lot of 500 errors and prematurely closed connections.
Is there a way to limit the number of requests per second? Or to limit the number of threads it's using?
EDIT:
The message endpoint processes messages from a QueueChannel:
@MessageEndpoint
public class CustomServiceActivator {

    private static final Logger logger = LogManager.getLogger();

    @Autowired
    IHttpService httpService;

    @ServiceActivator(
        inputChannel = "outputFilterChannel",
        outputChannel = "outputHttpServiceChannel",
        poller = @Poller( fixedDelay = "1000" )
    )
    public void processMessage(Data data) {
        httpService.push(data);
        try {
            Thread.sleep(20);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
The WebClient service class:
@Service
public class HttpService implements IHttpService {

    private static final String URL = "http://www.blabla.com/log";
    private static final Logger logger = LogManager.getLogger();

    @Autowired
    WebClient webClient;

    @Override
    public void push(Data data) {
        String body = constructString(data);
        Mono<ResponseEntity<Response>> res = webClient.post()
                .uri(URL + getLogType(data))
                .contentLength(body.length())
                .contentType(MediaType.APPLICATION_JSON)
                .syncBody(body)
                .exchange()
                .flatMap(response -> response.toEntity(Response.class));
        res.subscribe(new Consumer<ResponseEntity<Response>>() { ... });
    }
}
Resilience4j has excellent support for non-blocking rate limiting with Project Reactor.
Required dependencies (besides Spring WebFlux):
<dependency>
    <groupId>io.github.resilience4j</groupId>
    <artifactId>resilience4j-reactor</artifactId>
    <version>1.6.1</version>
</dependency>
<dependency>
    <groupId>io.github.resilience4j</groupId>
    <artifactId>resilience4j-ratelimiter</artifactId>
    <version>1.6.1</version>
</dependency>
Example:
import io.github.resilience4j.ratelimiter.RateLimiter;
import io.github.resilience4j.ratelimiter.RateLimiterConfig;
import io.github.resilience4j.reactor.ratelimiter.operator.RateLimiterOperator;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

import java.time.Duration;
import java.time.LocalDateTime;
import java.util.concurrent.atomic.AtomicInteger;

public class WebClientRateLimit
{
    private static final AtomicInteger COUNTER = new AtomicInteger(0);

    private final WebClient webClient;
    private final RateLimiter rateLimiter;

    public WebClientRateLimit()
    {
        this.webClient = WebClient.create();
        // enables 3 requests every 5 seconds
        this.rateLimiter = RateLimiter.of("my-rate-limiter",
                RateLimiterConfig.custom()
                        .limitRefreshPeriod(Duration.ofSeconds(5))
                        .limitForPeriod(3)
                        .timeoutDuration(Duration.ofMinutes(1)) // max wait time for a request; if reached, then error
                        .build());
    }

    public Mono<?> call()
    {
        return webClient.get()
                .uri("https://jsonplaceholder.typicode.com/todos/1")
                .retrieve()
                .bodyToMono(String.class)
                .doOnSubscribe(s -> System.out.println(COUNTER.incrementAndGet() + " - "
                        + LocalDateTime.now() + " - call triggered"))
                .transformDeferred(RateLimiterOperator.of(rateLimiter));
    }

    public static void main(String[] args)
    {
        WebClientRateLimit webClientRateLimit = new WebClientRateLimit();
        long start = System.currentTimeMillis();
        Flux.range(1, 16)
                .flatMap(x -> webClientRateLimit.call())
                .blockLast();
        System.out.println("Elapsed time in seconds: " + (System.currentTimeMillis() - start) / 1000d);
    }
}
Example output:
1 - 2020-11-30T15:44:01.575003200 - call triggered
2 - 2020-11-30T15:44:01.821134 - call triggered
3 - 2020-11-30T15:44:01.823133100 - call triggered
4 - 2020-11-30T15:44:04.462353900 - call triggered
5 - 2020-11-30T15:44:04.462353900 - call triggered
6 - 2020-11-30T15:44:04.470399200 - call triggered
7 - 2020-11-30T15:44:09.461199100 - call triggered
8 - 2020-11-30T15:44:09.463157 - call triggered
9 - 2020-11-30T15:44:09.463157 - call triggered
11 - 2020-11-30T15:44:14.461447700 - call triggered
10 - 2020-11-30T15:44:14.461447700 - call triggered
12 - 2020-11-30T15:44:14.461447700 - call triggered
13 - 2020-11-30T15:44:19.462098200 - call triggered
14 - 2020-11-30T15:44:19.462098200 - call triggered
15 - 2020-11-30T15:44:19.468059700 - call triggered
16 - 2020-11-30T15:44:24.462615 - call triggered
Elapsed time in seconds: 25.096
Docs: https://resilience4j.readme.io/docs/examples-1#decorate-mono-or-flux-with-a-ratelimiter
The question Limiting rate of requests with Reactor provides two answers (one in a comment):
1. zipWith another flux that acts as a rate limiter:
.zipWith(Flux.interval(Duration.of(1, ChronoUnit.SECONDS)))
2. just delay each web request, using the delayElements function (see the sketch below)
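For illustration, a minimal sketch of the delayElements approach; the endpoint URL is hypothetical and webClient is assumed to be an existing WebClient instance:

// space the outgoing calls one second apart before handing them to WebClient
Flux.range(1, 10)
    .delayElements(Duration.ofSeconds(1))
    .flatMap(i -> webClient.get()
        .uri("https://example.org/items/" + i) // hypothetical endpoint
        .retrieve()
        .bodyToMono(String.class))
    .subscribe(System.out::println);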
Edit: the answer below is valid for the blocking RestTemplate but does not really fit the reactive pattern.
WebClient does not have the ability to limit requests, but you could easily add this feature using composition.
You may throttle your client externally using RateLimiter from Guava
(https://google.github.io/guava/releases/19.0/api/docs/index.html?com/google/common/util/concurrent/RateLimiter.html).
In this tutorial, http://www.baeldung.com/guava-rate-limiter, you will find how to use the rate limiter in a blocking way, or with timeouts.
I would decorate all calls that need to be throttled in a separate class that:
1. limits the number of calls per second
2. performs the actual web call using WebClient
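A minimal sketch of that decorator, assuming Guava is on the classpath; the ThrottledHttpService name and the 50-permits-per-second figure are illustrative choices, not part of the original answer:

public class ThrottledHttpService implements IHttpService {

    // Guava RateLimiter: at most 50 permits per second (tune to the server's capacity)
    private final RateLimiter rateLimiter = RateLimiter.create(50.0);

    private final IHttpService delegate;

    public ThrottledHttpService(IHttpService delegate) {
        this.delegate = delegate;
    }

    @Override
    public void push(Data data) {
        rateLimiter.acquire(); // blocks the calling thread until a permit is free
        delegate.push(data);   // the delegate performs the actual WebClient call
    }
}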
I hope I'm not late for the party. Anyway, limiting the rate of requests is just one of the problems I faced a week ago while creating a crawler. Here are the issues:
1. I have to do recursive, paginated, sequential requests. Pagination parameters are included in the API that I'm calling.
2. Once a response is received, pause for 1 second before doing the next request.
3. For certain errors encountered, do a retry.
4. On retry, pause for a certain number of seconds.
Here's the solution:
private Flux<HostListResponse> sequentialCrawl() {
    AtomicLong pageNo = new AtomicLong(2);
    // Solution for #1 - Flux.expand
    return getHosts(1)
        .doOnRequest(value -> LOGGER.info("Start crawling."))
        .expand(hostListResponse -> {
            final long totalPages = hostListResponse.getData().getTotalPages();
            long currPageNo = pageNo.getAndIncrement();
            if (currPageNo <= totalPages) {
                LOGGER.info("Crawling page " + currPageNo + " of " + totalPages);
                // Solution for #2
                return Mono.just(1).delayElement(Duration.ofSeconds(1)).then(
                    getHosts(currPageNo)
                );
            }
            return Flux.empty();
        })
        .doOnComplete(() -> LOGGER.info("End of crawling."));
}

private Mono<HostListResponse> getHosts(long pageNo) {
    final String uri = hostListUrl + pageNo;
    LOGGER.info("Crawling " + uri);
    return webClient.get()
        .uri(uri)
        .exchange()
        // Solution for #3
        .retryWhen(companion -> companion
            .zipWith(Flux.range(1, RETRY + 1), (error, index) -> {
                String message = "Failed to crawl uri: " + error.getMessage();
                if (index <= RETRY && (error instanceof RequestIntervalTooShortException
                        || error instanceof ConnectTimeoutException
                        || "Connection reset by peer".equals(error.getMessage())
                )) {
                    LOGGER.info(message + ". Retries count: " + index);
                    return Tuples.of(error, index);
                } else {
                    LOGGER.warn(message);
                    throw Exceptions.propagate(error); // terminate the source with the 4th onError
                }
            })
            .map(tuple -> {
                // Solution for #4
                Throwable e = tuple.getT1();
                int delaySeconds = tuple.getT2();
                // TODO: Adjust these values according to your needs
                if (e instanceof ConnectTimeoutException) {
                    delaySeconds = delaySeconds * 5;
                } else if ("Connection reset by peer".equals(e.getMessage())) {
                    // The API that this app is calling will sometimes think that the requests are SPAM.
                    // So let's rest longer before retrying the request.
                    delaySeconds = delaySeconds * 10;
                }
                LOGGER.info("Will retry crawling after " + delaySeconds + " seconds to " + uri + ".");
                return Mono.delay(Duration.ofSeconds(delaySeconds));
            })
            .doOnNext(s -> LOGGER.warn("Request is too short - " + uri + ". Retried at " + LocalDateTime.now()))
        )
        .flatMap(clientResponse -> clientResponse.toEntity(String.class))
        .map(responseEntity -> {
            HttpStatus statusCode = responseEntity.getStatusCode();
            if (statusCode != HttpStatus.OK) {
                Throwable exception;
                // Convert json string to Java POJO
                HostListResponse response = toHostListResponse(uri, statusCode, responseEntity.getBody());
                // The API that I'm calling will return error code of 06 if request interval is too short
                if (statusCode == HttpStatus.BAD_REQUEST && "06".equals(response.getError().getCode())) {
                    exception = new RequestIntervalTooShortException(uri);
                } else {
                    exception = new IllegalStateException("Request to " + uri + " failed. Reason: " + responseEntity.getBody());
                }
                throw Exceptions.propagate(exception);
            } else {
                return toHostListResponse(uri, statusCode, responseEntity.getBody());
            }
        });
}
I use this to limit the number of active requests:
public DemoClass(WebClient.Builder webClientBuilder) {
    AtomicInteger activeRequest = new AtomicInteger();
    this.webClient = webClientBuilder
        .baseUrl("http://httpbin.org/ip")
        .filter(
            (request, next) -> Mono.just(next)
                .flatMap(a -> {
                    if (activeRequest.intValue() < 3) {
                        activeRequest.incrementAndGet();
                        return next.exchange(request)
                            .doOnNext(b -> activeRequest.decrementAndGet());
                    }
                    return Mono.error(new RuntimeException("Too many requests"));
                })
                .retryWhen(Retry.anyOf(RuntimeException.class)
                    .randomBackoff(Duration.ofMillis(300), Duration.ofMillis(1000))
                    .retryMax(50)
                )
        )
        .build();
}

public Mono<String> call() {
    return webClient.get()
        .retrieve()
        .bodyToMono(String.class);
}
We can customize the ConnectionProvider to rate limit the active connections of a WebClient.
You need to add pendingAcquireMaxCount for the number of waiting requests in the queue, as the default queue size is always 2 * maxConnections.
This limits the WebClient to serving that many requests at a time.
ConnectionProvider provider = ConnectionProvider.builder("builder")
    .maxConnections(maxConnections)
    .pendingAcquireMaxCount(maxPendingRequests)
    .build();

TcpClient tcpClient = TcpClient.create(provider);

WebClient client = WebClient.builder()
    .baseUrl("url")
    .clientConnector(new ReactorClientHttpConnector(HttpClient.from(tcpClient)))
    .build();
My goal here is to log the time of a process without using XML files for configuration. By reading other posts, I came up with enriching headers in the integration flow. This kind of works, but not for the right purpose: for every newly started process it gives me a startTime from when the application was launched (i.e. a constant). See below:
@Bean
public IntegrationFlow processFileFlow() {
    return IntegrationFlows
        .from(FILE_CHANNEL_PROCESSING)
        .transform(fileToStringTransformer())
        .enrichHeaders(h -> h.header("startTime", String.valueOf(System.currentTimeMillis())))
        .handle(FILE_PROCESSOR, "processFile").get();
}
My goal is to properly log the process without using XML files, like I said above, but I haven't managed to do this. I found an example and tried a solution with a ChannelInterceptorAdapter, like this:
@Component(value = "integrationLoggingInterceptor")
public class IntegrationLoggingInterceptor extends ChannelInterceptorAdapter {

    private static final Logger log = LoggerFactory.getLogger(IntegrationLoggingInterceptor.class);

    @Override
    public void postSend(Message<?> message, MessageChannel channel, boolean sent) {
        log.debug("Post Send - Channel " + channel.getClass());
        log.debug("Post Send - Headers: " + message.getHeaders() + " Payload: " + message.getPayload() + " Message sent?: " + sent);
    }

    @Override
    public Message<?> postReceive(Message<?> message, MessageChannel channel) {
        try {
            log.debug("Post Receive - Channel " + channel.getClass());
            log.debug("Post Receive - Headers: " + message.getHeaders() + " Payload: " + message.getPayload());
        } catch (Exception ex) {
            log.error("Error in post receive : ", ex);
        }
        return message;
    }
}
But I receive no logs at all. Any ideas?
The .enrichHeaders(h -> h.header("startTime", String.valueOf(System.currentTimeMillis()))) delegates to this:
public <V> HeaderEnricherSpec header(String name, V value, Boolean overwrite) {
    AbstractHeaderValueMessageProcessor<V> headerValueMessageProcessor =
            new StaticHeaderValueMessageProcessor<>(value);
    headerValueMessageProcessor.setOverwrite(overwrite);
    return header(name, headerValueMessageProcessor);
}
Pay attention to the StaticHeaderValueMessageProcessor. So, what you show really is a constant: the value is computed once, when the flow is built.
If you need a value calculated for each message, you should consider using the Function-based variant:
.enrichHeaders(h ->
        h.headerFunction("startTime",
                m -> String.valueOf(System.currentTimeMillis())))