I have created a Reactor Netty HTTP server which works fine. I want to add a delay to each response, i.e. if I send a request, I should get the response after, say, 10 seconds; I need this delay to test something. Below is the code used to create the server:
DisposableServer server = HttpServer.create().port(port).protocol(HttpProtocol.H2C).metrics(true, s -> s)
.wiretap(false).handle((request, response) -> response.status(httpResponseStatus).send()).bindNow();
server.onDispose().block();
The send() method returns a Mono, so you can invoke the delay* operators that Mono provides. The example above can be changed as follows (this version delays the headers and sends an empty response body):
DisposableServer server =
HttpServer.create()
.port(port)
.protocol(HttpProtocol.H2C)
.metrics(true, s -> s)
.wiretap(false)
.handle((request, response) ->
response.status(httpResponseStatus)
.send()
.delaySubscription(Duration.ofSeconds(10)))
.bindNow();
server.onDispose().block();
Update:
In case the I/O handler needs to delay first and then send a non-empty response, the example can be changed to the one below:
DisposableServer server =
HttpServer.create()
.port(port)
.protocol(HttpProtocol.H2C)
.metrics(true, s -> s)
.wiretap(false)
.handle((request, response) ->
Mono.delay(Duration.ofSeconds(10))
.flatMap(l -> {
// Set status code if needed
response.status(httpResponseStatus);
return response.sendString(Mono.just(String.valueOf(responseCode))).then();
}))
.bindNow();
server.onDispose().block();
I have a situation where I need to return HTTP 2XX when the WebClient returns any kind of 4XX.
My existing code is below:
public Mono<ResponseEntity<String>> postMethodA(String valueA) {
    return webClient
        .put()
        .uri("/")
        .bodyValue(valueA)
        .retrieve()
        .toEntity(String.class);
}
I added the onStatus method like this:
public Mono<ResponseEntity<String>> postMethodA(String valueA) {
    return webClient
        .put()
        .uri("/")
        .bodyValue(valueA)
        .retrieve()
        .onStatus(HttpStatus::is4xxClientError, response -> Mono.empty())
        .toEntity(String.class);
}
Even after adding
onStatus(HttpStatus::is4xxClientError, response -> Mono.empty())
it still does not work, because it is not able to return a 2XX.
Is there a way to change the HTTP status when returning the response? Can you please show an example?
To ignore an error response completely, and propagate neither response nor error, use a filter, or add onErrorResume downstream, for example:
webClient.get()
.uri("https://someUrl.com/account/123")
.retrieve()
.bodyToMono(Account.class)
.onErrorResume(WebClientResponseException.class,
ex -> ex.getRawStatusCode() == 404 ? Mono.empty() : Mono.error(ex));
Reference: JavaDoc
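If the goal is not only to ignore the error but to actually answer with a 2XX, one possible approach is to let retrieve() raise its default WebClientResponseException and map that error back into a successful entity. This is only a sketch, assuming the same webClient as in the question; reusing the upstream error body as the 200 body is purely illustrative:
public Mono<ResponseEntity<String>> postMethodA(String valueA) {
    return webClient
        .put()
        .uri("/")
        .bodyValue(valueA)
        .retrieve()
        .toEntity(String.class)
        // retrieve() signals WebClientResponseException for 4xx/5xx by default,
        // so a 4xx can be turned back into a 200 entity here
        .onErrorResume(WebClientResponseException.class,
                ex -> ex.getStatusCode().is4xxClientError()
                        ? Mono.just(ResponseEntity.ok(ex.getResponseBodyAsString()))
                        : Mono.error(ex));
}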
I am using Spring WebClient to call a REST API. I want to throw an error based on the response. For example, if there is an error (400) with the body
{"error": "error message1 "}
then I want to throw an error with "error message1". Similarly, if there is an error (400) with the body
{"error_code": "100020"}
then I want to throw an error with error_code 100020. I want to do it in a non-blocking way.
public Mono<Response> webclient1(...) {
return webClient.post().uri(createUserUri).header(CONTENT_TYPE, APPLICATION_JSON)
.body(Mono.just(request), Request.class).retrieve()
.onStatus(HttpStatus::isError, clientResponse -> {
//Error Handling
}).bodyToMono(Response.class);
}
The body from ClientResponse should be extracted in a reactive way (javadoc), and the lambda in the onStatus method should return another Mono (javadoc). To sum up, take a look at the example below:
onStatus(HttpStatus::isError, response -> response
.bodyToMono(Map.class)
.flatMap(body -> {
var message = body.toString(); // here you should probably use some JSON mapper
return Mono.error(new Exception(message));
})
);
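Applied to the two error shapes from the question, the same pattern might look roughly like the sketch below; the exception type and the key-picking logic are only illustrations:
onStatus(HttpStatus::isError, response -> response
    .bodyToMono(Map.class)
    .flatMap(body -> {
        // use "error" if present, otherwise fall back to "error_code"
        Object message = body.containsKey("error") ? body.get("error") : body.get("error_code");
        return Mono.error(new RuntimeException(String.valueOf(message)));
    })
);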
I have a controller proxy API endpoint which receives different request payloads intended for different services. This controller validates the payload and adds a few headers based on certain rules. In the current context, I do not want to parse the response received from the upstream services. The proxy method should simply stream the response to downstream clients so that it can scale well without running into memory issues when dealing with large response payloads.
I have implemented the method like this:
suspend fun proxyRequest(
url: String,
request: ServerHttpRequest,
customHeaders: HttpHeaders = HttpHeaders.EMPTY,
): ResponseEntity<String>? {
val modifiedReqHeaders = getHeadersWithoutOrigin(request, customHeaders)
val uri = URI.create(url)
val webClient = proxyClient.method(request.method!!)
.uri(uri)
.body(request.body)
modifiedReqHeaders.forEach {
val list = it.value.iterator().asSequence().toList()
val ar: Array<String> = list.toTypedArray()
@Suppress("SpreadOperator")
webClient.header(it.key, *ar)
}
return webClient.exchangeToMono { res ->
res.bodyToMono(String::class.java).map { b -> ResponseEntity.status(res.statusCode()).body(b) }
}.awaitFirstOrNull()
}
But this doesn't seem to be streaming. When I try to download a large file, it complains that it failed to hold the large data buffer.
Can someone help me write a reactive, streamed approach?
This is what I have done finally:
suspend fun proxyRequest(
url: String,
request: ServerHttpRequest,
response: ServerHttpResponse,
customHeaders: HttpHeaders = HttpHeaders.EMPTY,
): Void? {
val modifiedReqHeaders = getHeadersWithoutOrigin(request, customHeaders)
val uri = URI.create(url)
val webClient = proxyClient.method(request.method!!)
.uri(uri)
.body(request.body)
modifiedReqHeaders.forEach {
val list = it.value.iterator().asSequence().toList()
val ar: Array<String> = list.toTypedArray()
@Suppress("SpreadOperator")
webClient.header(it.key, *ar)
}
val respEntity = webClient
.retrieve()
.toEntityFlux<DataBuffer>()
.awaitSingle()
response.apply {
headers.putAll(respEntity.headers)
statusCode = respEntity.statusCode
}
return response.writeWith(respEntity.body ?: Flux.empty()).awaitFirstOrNull()
}
Can you let me know whether this is truly streaming the data downstream and flushing it?
Your first code snippet fails with memory issues because it buffers the whole response body in memory as a String and forwards it afterwards. If the response is quite large, you might fill the entire available memory.
The second approach also fails: instead of returning the entire Flux<DataBuffer> (that is, the entire response as buffers), you're only returning the first one, so the response is incomplete.
Even if you manage to fix this particular issue, there are many other things to pay attention to:
it seems you're not returning the original response headers, effectively changing the response content type
you should not forward all the incoming response headers, as some of them are really up to the server (like transfer encoding)
what happens with security-related request/response headers?
how are you handling tracing and metrics?
You could take a look at the Spring Cloud Gateway project, which handles a lot of those subtleties and lets you manipulate requests/responses.
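For illustration only, here is a rough sketch (in Java; the Kotlin version is analogous) of a pass-through that never aggregates or awaits the body, so the upstream DataBuffer flux flows straight into the server response. Names such as proxyClient are placeholders, and the header handling is deliberately simplified:
public Mono<Void> proxyRequest(String url, ServerHttpRequest request, ServerHttpResponse response) {
    return proxyClient.method(request.getMethod())
        .uri(URI.create(url))
        // real code should filter hop-by-hop and security-related request headers here
        .headers(h -> h.addAll(request.getHeaders()))
        .body(BodyInserters.fromDataBuffers(request.getBody()))
        .exchangeToMono(upstream -> {
            response.setStatusCode(upstream.statusCode());
            // as noted above, some response headers (e.g. transfer encoding) should not be copied blindly
            response.getHeaders().putAll(upstream.headers().asHttpHeaders());
            // writeWith streams the buffers as they arrive; nothing is aggregated in memory
            return response.writeWith(upstream.bodyToFlux(DataBuffer.class));
        });
}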
I have a controller method like this:
@PostMapping("/*")
fun proxy(@RequestBody body: String): Mono<ByteArray> {
return roundRobinBean.getNext()
.post()
.uri("/api")
.body(BodyInserters.fromObject(body))
.retrieve()
.bodyToMono<ByteArray>()
.doOnSuccess{
threadPool.submit(PutToCacheJob(body, it, cacheBean))
}
.doOnError{
logger.error(it.message, it)
}
}
roundRobinBean returns a WebClient for some host. If I get a connection timeout exception or a 500 response, I need to call another host or return data from the cache. Does Mono have a handler for changing the inner data?
You can use the onErrorResume operator, which lets you define a fallback in case of errors.
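For instance, here is a rough sketch of such a fallback chain (in Java; the Kotlin version is analogous), where primaryClient, fallbackClient and cache are made-up names and the cache lookup is assumed to return a byte[] or null. By default retrieve() turns a 500 into a WebClientResponseException, so it reaches onErrorResume together with connection timeouts:
public Mono<byte[]> proxy(String body) {
    return primaryClient.post()
        .uri("/api")
        .body(BodyInserters.fromObject(body))
        .retrieve()
        .bodyToMono(byte[].class)
        // connection errors and 5xx responses land here
        .onErrorResume(ex -> fallbackClient.post()
            .uri("/api")
            .body(BodyInserters.fromObject(body))
            .retrieve()
            .bodyToMono(byte[].class)
            // if the second host also fails, fall back to the cached value (possibly empty)
            .onErrorResume(e -> Mono.justOrEmpty(cache.get(body))));
}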
I am getting a client-abort socket timeout read exception on the server side while invoking a REST service through an HTTPS client inside a Vert.x application. If I invoke an HTTP service, it works fine though.
I get 200 OK in Vert.x and do not get any data back, and I also get a "connection was closed" error in Vert.x.
Any idea why this happens? Help appreciated.
Code:
final HttpClient httpClient1 = vertx.createHttpClient(
new HttpClientOptions()
.setDefaultHost("localhost")
.setDefaultPort(8443)
.setSsl(true)
.setKeepAlive(true)
.setMaxPoolSize(100)
.setTrustAll(true)
);
HttpClientRequest req = httpClient1.request(HttpMethod.POST, "/api/test/");
req.headers()
.set("Content-Length","10000000")
.set("Content-Type","application/json")
.set("Cache-Control", "no-transform, max-age=0");
Buffer body=Buffer.buffer("Hello World");
req.write(body);
That's not the way to send a request.
I hope this working example helps you.
Server:
final Vertx vertx = Vertx.vertx();
vertx.createHttpServer().requestHandler((c) -> {
c.bodyHandler(b -> {
System.out.println(b.toString());
});
c.response().end("ok");
}).listen(8443);
Client:
final HttpClient client = vertx.createHttpClient(
new HttpClientOptions()
.setDefaultHost("localhost")
.setDefaultPort(8443));
client.request(HttpMethod.POST, "/", (r) -> {
System.out.println("Got response");
}).putHeader("Content-Length", Integer.toString("Hello".length()))
.write("Hello").end();
Two important notes: you must end .request() with .end() and you must set Content-Length correctly.