I've come across this problem:
Pooled connection observed an error
reactor.netty.http.client.HttpClientOperations$PrematureCloseException: Connection prematurely closed BEFORE response
I'm gathering metrics from the Graphite server via a reactive WebClient for the requested timeframes. To reduce the amount of data transferred over HTTP, I've divided each day into chunks (24 / 4); I then combine the responses into a matrix, save it to a CSV file, and merge it into another one.
The problem appears as the number of days increases: 2 or 3 days work fine, but more days mean more closed-connection errors. I tried adding delays, which helps a bit, but only enough to process one more day without errors.
Stack-trace:
ClosedConnectionStacktrace
I found a somewhat similar issue, https://github.com/reactor/reactor-netty/issues/413, but I'm not sure it applies.
Here are the code snippets:
discoveryMono.thenReturn(true) // discover metrics
    .flux()
    .flatMap(m -> Flux.fromIterable(dates) // process all days
        .delayElements(Duration.ofSeconds(1L))
        .flatMap(date -> Flux.range(0, 24 / intervalHours) // divide the day into chunks
            .delayElements(Duration.of(100L, ChronoUnit.MILLIS))
            .flatMap(timeFraction -> Flux.fromIterable(sequentialTasks) // tasks that invoke the WebClient
                .flatMap(task -> {
                    Instant from = date.plus(timeFraction * intervalHours, ChronoUnit.HOURS);
                    Instant until = from.plus(intervalHours, ChronoUnit.HOURS);
                    TaskParams taskParams = new TaskParams(itSystem, from, until, TaskParams.PollingType.FULLDAY);
                    log.trace("workflow | from={}, until={}", from, until);
                    return task.apply(taskParams)
                            // .doOnNext(m -> log.trace("Matrix: {}", m))
                            .onErrorResume(err -> {
                                log.error("processFullDaysInChunks | Error: {}", err);
                                return Mono.empty();
                            });
                })
                .flatMap(params -> Flux.fromIterable(fileTasks) // tasks to check/merge files, not relevant here
                    .flatMap(fileTask -> fileTask.apply(params)
                        .onErrorResume(err -> {
                            log.error("processFullDaysInChunks | Error: {}", err);
                            return Mono.empty();
                        })
                    )
                )
            )
        )
    )
    .subscribeOn(fullDayScheduler)
    .subscribe();
And here is part of the task with the WebClient invocation:
private Flux<GraphiteResultDTO> getGraphiteResults(ITSystem itSystem, Instant from, Instant until) {
    String fromStr = FROM_PARAMETER + from.getEpochSecond();
    String untilStr = UNTIL_PARAMETER + until.getEpochSecond();
    String uri = RENDER_URI + TARGET_PARAMETER + "{targetParam}" + fromStr + untilStr + FORMAT_JSON_PARAMETER;
    WebClient webClient = getGraphiteWebClient(itSystem.getDataSource());
    Set<String> targetParams = storage.getValueByITSystemId(itSystem.getId()).getSecond();
    Flux<GraphiteResultDTO> result = Flux.fromIterable(targetParams)
            .delayElements(Duration.of(10, ChronoUnit.MILLIS))
            .flatMap(targetParam -> {
                Map<String, String> params = Map.ofEntries(entry("targetParam", targetParam));
                if (log.isTraceEnabled()) {
                    log.trace("getGraphiteResults | Uri={}, TargetParam: {}", uri, targetParam);
                }
                return webClient.get()
                        .uri(uri, params)
                        .retrieve()
                        .onStatus(HttpStatus::isError, clientResponse -> {
                            log.trace("clientResponse | transforming body");
                            // log the error body; completing empty means the status is not turned into an exception
                            return clientResponse.bodyToMono(String.class)
                                    .doOnNext(errorString -> log.error("retrieve(), error={}", errorString))
                                    .then(Mono.empty());
                        })
                        .bodyToFlux(GraphiteResultDTO.class)
                        .onErrorResume(throwable -> {
                            log.error("webclient | bodyToFlux error={}", throwable.getMessage());
                            return Flux.empty();
                        });
            });
    return result;
}
I resolved the problem by replacing the flatMap operator with concatMap (prefetch 1) and limiting the rate with the limitRate operator. All requests are now processed sequentially, one by one, so the time delays are no longer needed.
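For illustration, here is a minimal, self-contained sketch of that idea; the fetchChunk stand-in and the exact operator placement are my assumptions, not the production code above:
import java.time.Duration;

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class SequentialRequestsSketch {

    // Hypothetical stand-in for the WebClient-backed task used above.
    static Mono<String> fetchChunk(int chunk) {
        return Mono.just("chunk-" + chunk).delayElement(Duration.ofMillis(50));
    }

    public static void main(String[] args) {
        Flux.range(0, 24 / 4)                                       // the chunks of one day
                .limitRate(1)                                       // request items from upstream one at a time
                .concatMap(SequentialRequestsSketch::fetchChunk, 1) // prefetch 1: strictly sequential requests
                .doOnNext(System.out::println)
                .blockLast();                                       // blocking only for the demo
    }
}
With concatMap the next request is only issued after the previous inner publisher completes, which keeps a single request in flight at a time.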
I want to render an object composed of two Mono/Flux elements (code snippet below):
Mono<List<NodeDTO>> nodeDTOFlux = this.webClient
        .get()
        .uri(NODES_WITH_LIMIT + limit)
        .retrieve()
        .onStatus(HttpStatus::isError,
                response -> response.bodyToMono(String.class).flatMap(
                        msg -> Mono.error(new ApiCallException(msg, response.statusCode()))))
        .bodyToFlux(new ParameterizedTypeReference<Node>() {
        })
        .map(node -> nodeMapper.toNodeDTO(node))
        .collectList();

Mono<List<EdgeDTO>> edgeDTOFlux = this.webClient
        .get()
        .uri(EDGES_WITH_LIMIT + limit)
        .retrieve()
        .onStatus(HttpStatus::isError,
                response -> response.bodyToMono(String.class).flatMap(
                        msg -> Mono.error(new ApiCallException(msg, response.statusCode()))))
        .bodyToFlux(new ParameterizedTypeReference<Edge>() {
        })
        .map(edge -> edgeMapper.toEdgeDTO(edge))
        .collectList();
I tried the zip() method, but it's not what I'm aiming for.
I tried to return an object like this:
GraphDataDTO graphDataDTO = new GraphDataDTO();
graphDataDTO.setEdgeDTOS(edgeDTOFlux);
graphDataDTO.setNodeDTOS(nodeDTOFlux);
I get a result in my console, but the object returned is:
{
    "nodeDTOS": {
        "scanAvailable": true
    },
    "edgeDTOS": {
        "scanAvailable": true
    }
}
The return happens before all the flux elements have been received. Is there any solution without blocking?
Thanks in advance.
This should work:
return Mono.zip(nodeDTOFlux, edgeDTOFlux)
        .map(tuple2 -> GraphDataDTO.builder().nodeDTO(tuple2.getT1()).edgeDTO(tuple2.getT2()).build());
It zips the two Monos into a Tuple2 of the node list and the edge list, and maps that tuple into a GraphDataDTO.
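For context, here is a sketch of how the zipped result might be exposed so the framework serializes it only after both lists have arrived; the method name and the builder accessors are assumptions, while nodeDTOFlux and edgeDTOFlux are the Monos built in the question:
// Sketch only: return Mono<GraphDataDTO> instead of a GraphDataDTO that holds Monos,
// so serialization happens after both lists have been collected.
public Mono<GraphDataDTO> getGraphData() {
    return Mono.zip(nodeDTOFlux, edgeDTOFlux)
            .map(tuple2 -> GraphDataDTO.builder()
                    .nodeDTO(tuple2.getT1())
                    .edgeDTO(tuple2.getT2())
                    .build());
}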
I tried to use the Vert.x HttpClient/WebClient to consume a GraphQL subscription, but it did not work as expected.
The related server-side code (written with Vert.x Web GraphQL) looks like the following: when a comment is added, onNext is triggered to send the comment to the Publisher.
public VertxDataFetcher<UUID> addComment() {
    return VertxDataFetcher.create((DataFetchingEnvironment dfe) -> {
        var commentInputArg = dfe.getArgument("commentInput");
        var jacksonMapper = DatabindCodec.mapper();
        var input = jacksonMapper.convertValue(commentInputArg, CommentInput.class);
        return this.posts.addComment(input)
                .onSuccess(id -> this.posts.getCommentById(id.toString())
                        .onSuccess(c -> subject.onNext(c)));
    });
}

private BehaviorSubject<Comment> subject = BehaviorSubject.create();

public DataFetcher<Publisher<Comment>> commentAdded() {
    return (DataFetchingEnvironment dfe) -> {
        ConnectableObservable<Comment> connectableObservable = subject.share().publish();
        connectableObservable.connect();
        return connectableObservable.toFlowable(BackpressureStrategy.BUFFER);
    };
}
In the client I mixed HttpClient and WebClient. Most of the time I would prefer WebClient, which is easier for handling form posts, but it does not seem to support opening a WebSocket connection.
So the WebSocket part falls back to HttpClient.
var options = new HttpClientOptions()
        .setDefaultHost("localhost")
        .setDefaultPort(8080);
var httpClient = vertx.createHttpClient(options);

httpClient.webSocket("/graphql")
        .onSuccess(ws -> {
            ws.textMessageHandler(text -> log.info("web socket message handler:{}", text));
            JsonObject messageInit = new JsonObject()
                    .put("type", "connection_init")
                    .put("id", "1");
            JsonObject message = new JsonObject()
                    .put("payload", new JsonObject()
                            .put("query", "subscription onCommentAdded { commentAdded { id content } }"))
                    .put("type", "start")
                    .put("id", "1");
            ws.write(messageInit.toBuffer());
            ws.write(message.toBuffer());
        })
        .onFailure(e -> log.error("error: {}", e));
// this client here is WebClient.
client.post("/graphql")
        .sendJson(Map.of(
                "query", "mutation addComment($input:CommentInput!){ addComment(commentInput:$input) }",
                "variables", Map.of(
                        "input", Map.of(
                                "postId", id,
                                "content", "comment content of post id" + LocalDateTime.now()
                        )
                )
        ))
        .onSuccess(data -> log.info("data of addComment: {}", data.bodyAsString()))
        .onFailure(e -> log.error("error: {}", e));
When running the client and server, the comment is added, but the WebSocket client does not print any info about a WebSocket message. On the server console there is a message like this:
2021-06-25 18:45:44,356 DEBUG [vert.x-eventloop-thread-1] graphql.GraphQL: Execution '182965bb-80de-416d-b5fe-fe157ab87f1c' completed with zero errors
It seems the backend commentAdded DataFetcher is not invoked at all.
The complete code of the GraphQL client and server is shared on my GitHub.
After reading some of the Vert.x Web GraphQL test code, I found I have to add a connectionInitHandler on the ApolloWSHandler, like this:
.connectionInitHandler(connectionInitEvent -> {
    JsonObject payload = connectionInitEvent.message().content().getJsonObject("payload");
    if (payload != null && payload.containsKey("rejectMessage")) {
        connectionInitEvent.fail(payload.getString("rejectMessage"));
        return;
    }
    connectionInitEvent.complete(payload);
})
When the client sends the connection_init message, connectionInitEvent.complete must be called to start the communication between the client and the server.
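For reference, a sketch of where that handler is attached on the server; graphQL and router are assumed to be created elsewhere in the application, so this is not the exact code from the project:
// Sketch: registering the ApolloWSHandler with a connectionInitHandler on the router.
// `graphQL` and `router` are assumed to exist; the rejectMessage check above can be added as needed.
ApolloWSHandler apolloWSHandler = ApolloWSHandler.create(graphQL)
        .connectionInitHandler(connectionInitEvent ->
                connectionInitEvent.complete(connectionInitEvent.message().content().getJsonObject("payload")));

router.route("/graphql").handler(apolloWSHandler);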
I have a webhook service that sends events to different sources (URLs). By design, the request timeout is 10 s; if a request fails, it retries sending 3 times. If all retries fail, code must be executed to disable that URL in the DB.
So far I've managed to retry with a delay of 5 seconds, but I'm not sure how to execute code after the final failure.
try {
    String body = objectMapper.writeValueAsString(webhookDTO);
    webClient.post()
            .uri(webhook.getUrl())
            .contentType(MediaType.APPLICATION_JSON)
            .bodyValue(body)
            .exchange()
            .timeout(Duration.ofSeconds(5))
            .retryWhen(Retry.backoff(3, Duration.ofSeconds(5))
                    .jitter(0d)
                    .doAfterRetry(retrySignal -> logger.info("Retried " + retrySignal.totalRetries()))
                    .onRetryExhaustedThrow((retryBackoffSpec, retrySignal) -> new WebhookTimeoutException()))
            .doOnSuccess(clientResponse -> logger.info("Event is received by " + client))
            .subscribe();
} catch (Exception e) {
    logger.error("Error on webhook dispatcher: ", e);
}
Can anyone give some examples of how to do this?
You are almost there! Just use doOnError as shown below. The idea is that once all the attempts have failed, you throw WebhookTimeoutException. doOnError is called only when that error is thrown, and that is where you update the DB. The exception class is optional; you can ignore it.
webClient.post()
        .uri(webhook.getUrl())
        .contentType(MediaType.APPLICATION_JSON)
        .bodyValue(body)
        .exchange()
        .timeout(Duration.ofSeconds(5))
        .retryWhen(Retry.backoff(3, Duration.ofSeconds(5))
                .jitter(0d)
                .doAfterRetry(retrySignal -> logger.info("Retried " + retrySignal.totalRetries()))
                .onRetryExhaustedThrow((retryBackoffSpec, retrySignal) -> new WebhookTimeoutException()))
        .doOnSuccess(clientResponse -> logger.info("Event is received by " + client))
        .doOnError(WebhookTimeoutException.class, (msg) -> {
            System.out.println("Message :: " + msg);
            // here update the DB
            dbRepository.save(...);
        })
        .subscribe();
I was wondering if someone could eyeball the following code snippet and tell me why the subscriberContext inside the doOnError is not triggered.
public Mono<ServerResponse> handlePlatformAuthenticationResponse(final ServerRequest serverRequest) {
    Mono<MultiValueMap<String, String>> formData = serverRequest.body(BodyExtractors.toFormData());
    return formData
            .flatMap(this::provisionUserAndClass)
            .flatMap(tuple -> Mono.subscriberContext()
                    .map(context -> {
                        // this is invoked if provisionUserAndClass completes successfully
                        TelemetryData telemetryData = context.get(TELEMETRY_DATA);
                        LTILaunchRequest<LTILaunchRequestSettings> launchRequest = tuple.getT2();
                        this.addLaunchDetailsToTelemetryContext(launchRequest, telemetryData);
                        return tuple;
                    }))
            .doOnError(error -> Mono.subscriberContext()
                    .map(context -> {
                        // this is never invoked if provisionUserAndClass returns a Mono.error
                        TelemetryData telemetryData = context.get(TELEMETRY_DATA);
                        // log telemetryData + error message
                    }))
            .subscriberContext(context -> context.put(TELEMETRY_DATA, new TelemetryData()));
}

private Mono<Tuple2<ClassAndUserProvisioningResponse, LTILaunchRequest<LTILaunchRequestSettings>>> provisionUserAndClass(
        LTILaunchRequest<LTILaunchRequestSettings> ltiLaunchRequest) {
    // returning a Mono.error just to see the behavior of Mono.subscriberContext() when an error occurs;
    // the actual code will call a service method
    return Mono.error(new ProvisioningException("fake"));
}
private Mono<Tuple2<ClassAndUserProvisioningResponse, LTILaunchRequest<LTILaunchRequestSettings>>> provisionUserAndClass(
LTILaunchRequest<LTILaunchRequestSettings> ltiLaunchRequest) {
// returning a Mono.error just to see behavior of Mono.subscriberContext() when error occurs. Actual code will call a service method
return Mono.error(new ProvisioningException("fake"));
}
To access the context in case of an error, you could use the doOnEach operator:
.doOnEach(signal -> {
    if (signal.isOnError()) {
        TelemetryData telemetryData = signal.getContext().get(TELEMETRY_DATA);
        Throwable error = signal.getThrowable();
        // ...
    }
})
Mono.subscriberContext() can only be used meaningfully in operators where you have to return a Mono, like flatMap, concatMap, etc., but not in side-effect operators, where there is nothing that would subscribe to the Mono<Context>.
.doOnError(error -> Mono.subscriberContext()
        .map(context -> {
            // this is never invoked if provisionUserAndClass returns a Mono.error
            TelemetryData telemetryData = context.get(TELEMETRY_DATA);
            // log telemetryData + error message
        })
        .subscribe())
You forgot to subscribe to the Mono.subscriberContext().
The test below passes when I use monoFromSupplier as selectedMono.
However, when I switch to monoFromWebClient it doesn't advance time properly. What am I doing wrong here?
StepVerifier.withVirtualTime(() -> {
    Mono<String> monoFromSupplier = Mono.fromSupplier(() -> "AA")
            .doOnNext(po -> System.out.println("monoFromSupplier:onNext " + Thread.currentThread().getName()));

    Mono<String> monoFromWebClient = WebClient.create("http://...")
            .get()
            .retrieve()
            .bodyToMono(String.class)
            .doOnNext(po -> System.out.println("monoFromWebClient:onNext " + Thread.currentThread().getName()));

    Mono<?> selectedMono = monoFromSupplier;

    return selectedMono.repeatWhen(companion -> companion.take(3)
            .delayUntil(r -> {
                Duration dur = Duration.ofSeconds(500);
                System.out.println("delay... " + dur);
                return Mono.delay(dur);
            }))
            .last()
            .log();
})
        .thenAwait(Duration.ofDays(1))
        .expectNextCount(1)
        .expectComplete()
        .verify();
Reactor's virtual time support only works within a single JVM: it works by replacing the Scheduler's clock with a virtual one (often making it tick faster). The WebClient here crosses a network boundary and sends a real HTTP request, and Reactor can't manipulate real, physical time.
TL;DR: this is not supported.
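If the goal is to exercise the repeat/delay logic under virtual time, one workaround (my assumption, not part of the original test) is to put the WebClient call behind a supplier so the test can substitute an immediate stub:
import java.time.Duration;
import java.util.function.Supplier;

import org.junit.jupiter.api.Test;
import reactor.core.publisher.Mono;
import reactor.test.StepVerifier;

class VirtualTimeSketchTest {

    // `fetchBody` is a hypothetical seam: in production it would wrap the WebClient call,
    // while the test passes an immediate stub, so only Mono.delay depends on the clock.
    static Mono<String> repeatWithDelay(Supplier<Mono<String>> fetchBody) {
        return fetchBody.get()
                .repeatWhen(companion -> companion.take(3)
                        .delayUntil(r -> Mono.delay(Duration.ofSeconds(500))))
                .last();
    }

    @Test
    void advancesVirtualTime() {
        StepVerifier.withVirtualTime(() -> repeatWithDelay(() -> Mono.just("AA")))
                .thenAwait(Duration.ofDays(1))
                .expectNextCount(1)
                .expectComplete()
                .verify();
    }
}
This keeps the time-sensitive operators inside the virtual-time supplier while the real HTTP call is tested separately.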