In one of our services I tried to configure AWS request signing in the Spring Data Reactive Elasticsearch configuration.
Spring lets you customize the underlying WebClient through withWebClientConfigurer:
ClientConfiguration clientConfiguration = ClientConfiguration.builder()
        .connectedTo("localhost:9200")
        .usingSsl()
        .withWebClientConfigurer(
                webClient -> webClient.mutate()
                        .filter(new AwsSigningInterceptor())
                        .build())
        // ... other options to configure if required
        .build();
This lets us sign the outgoing requests. However, AWS Signature V4 needs the URL, query parameters, headers and the request body (for POST/PUT requests) to generate the signed headers.
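For reference, here is a minimal sketch of what a signer such as the AwsHeaderSigner used below could look like, built on the AWS SDK v2 Aws4Signer; the class name, method signature and wiring are my assumptions for illustration, not part of the original setup:

import java.io.ByteArrayInputStream;
import java.net.URI;
import java.util.List;
import java.util.Map;

import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.auth.signer.Aws4Signer;
import software.amazon.awssdk.auth.signer.params.Aws4SignerParams;
import software.amazon.awssdk.http.SdkHttpFullRequest;
import software.amazon.awssdk.http.SdkHttpMethod;
import software.amazon.awssdk.regions.Region;

public class AwsHeaderSigner {

    private final Aws4Signer signer = Aws4Signer.create();
    private final DefaultCredentialsProvider credentials = DefaultCredentialsProvider.create();

    // Hypothetical signature; the point is that the body bytes must be available here.
    public Map<String, List<String>> createSigningHeaders(URI uri, String method,
                                                          Map<String, List<String>> headers,
                                                          byte[] body, String service, String region) {
        SdkHttpFullRequest.Builder unsigned = SdkHttpFullRequest.builder()
                .uri(uri)
                .method(SdkHttpMethod.fromValue(method))
                // The body is part of the canonical request, so the signature depends on it.
                .contentStreamProvider(() -> new ByteArrayInputStream(body));
        headers.forEach((name, values) -> values.forEach(value -> unsigned.appendHeader(name, value)));

        Aws4SignerParams params = Aws4SignerParams.builder()
                .awsCredentials(credentials.resolveCredentials())
                .signingName(service)             // e.g. "es"
                .signingRegion(Region.of(region)) // e.g. "us-west-2"
                .build();

        // The signed request carries Authorization, X-Amz-Date, etc.
        return signer.sign(unsigned.build(), params).headers();
    }
}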
Using this configurer hook I created a simple exchange filter function to sign the request, but inside that function I was not able to access the request body.
Below is the filter function I was trying to use:
@Component
public class AwsSigningInterceptor implements ExchangeFilterFunction
{
    private final AwsHeaderSigner awsHeaderSigner;

    public AwsSigningInterceptor(AwsHeaderSigner awsHeaderSigner)
    {
        this.awsHeaderSigner = awsHeaderSigner;
    }

    @Override
    public Mono<ClientResponse> filter(ClientRequest request, ExchangeFunction next)
    {
        // should pass the request body bytes in place of new byte[]{}
        Map<String, List<String>> signingHeaders =
                awsHeaderSigner.createSigningHeaders(request, new byte[]{}, "es", "us-west-2");
        ClientRequest.Builder requestBuilder = ClientRequest.from(request);
        signingHeaders.forEach((key, value) -> requestBuilder.header(key, value.toArray(new String[0])));
        return next.exchange(requestBuilder.build());
    }
}
I also tried to access the request body inside the ExchangeFilterFunction by decorating the outgoing message, as shown below:
ClientRequest.from(newRequest.build())
.body(
(outputMessage, context) -> {
ClientHttpRequestDecorator loggingOutputMessage =
new ClientHttpRequestDecorator(outputMessage) {
@Override
public Mono<Void> writeWith(Publisher<? extends DataBuffer> body) {
log.info("Inside write with method");
body =
DataBufferUtils.join(body)
.map(
content -> {
// Log request body using
// 'content.toString(StandardCharsets.UTF_8)'
String requestBody =
content.toString(StandardCharsets.UTF_8);
Map<String, Object> signedHeaders =
awsSigner.getSignedHeaders(
request.url().getPath(),
request.method().name(),
multimap,
requestHeadersMap,
Optional.of(
requestBody.getBytes(StandardCharsets.UTF_8)));
log.info("Signed Headers generated:{}", signedHeaders);
signedHeaders.forEach(
(key, value) -> {
newRequest.header(key, value.toString());
});
return content;
});
log.info("Before returning the body");
return super.writeWith(body);
}
@Override
public Mono<Void> setComplete() { // This is for requests with no body (e.g. GET).
Map<String, Object> signedHeaders =
awsSigner.getSignedHeaders(
request.url().getPath(),
request.method().name(),
multimap,
requestHeadersMap,
Optional.of("".getBytes(StandardCharsets.UTF_8)));
log.info("Signed Headers generated:{}", signedHeaders);
signedHeaders.forEach(
(key, value) -> {
newRequest.header(key, value.toString());
});
return super.setComplete();
}
};
return originalBodyInserter.insert(loggingOutputMessage, context);
})
.build();
But with the above approach I was not able to change the request headers: adding headers throws an UnsupportedOperationException inside the writeWith method, presumably because the headers are read-only by the time the body is being written.
Has anyone used Spring Data Reactive Elasticsearch and configured it to sign requests with AWS signed headers?
Any help would be highly appreciated.
Related
I'm wondering if it's possible to achieve two-way streaming using Spring WebFlux.
Basically, I want the client to send a flux of data that the server receives, maps to String and returns, all fluently and without having to collect the data first.
I did it using RSocket, but I'm wondering if I can get the same result over HTTP/2 (with Spring and Project Reactor).
I tried it like this:
1- Client:
public Mono<Void> stream() {
var input = Flux.range(1, 10).delayElements(Duration.ofMillis(500));
return stockWebClient.post()
.uri("/stream")
.body(BodyInserters.fromPublisher(input, Integer.class))
.accept(MediaType.TEXT_EVENT_STREAM)
.retrieve()
.bodyToFlux(String.class)
.log()
.then();
}
2- Server:
@PostMapping(value = "/stream", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<String> stream(@RequestBody Integer i) {
    return Flux.range(i, i + 10).map(n -> String.valueOf(i)).log();
}
Or:
public Flux<String> stream(@RequestBody Flux<Integer> i) {
    return i.map(n -> String.valueOf(i)).log();
}
Or:
@PostMapping(value = "/stream", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<String> stream(@RequestBody List<Integer> i) {
    return Flux.fromIterable(i).map(n -> String.valueOf(i)).log();
}
None worked correctly.
If you want to use Server-Sent Events you need to return a Flux<ServerSentEvent<String>>.
So your server method should be:
@PostMapping(value = "/stream", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<ServerSentEvent<String>> stream(@RequestBody Integer i) {
    return Flux.range(i, i + 10).map(n -> ServerSentEvent.builder(String.valueOf(n)).build());
}
But in this case the body is only a single Integer, so your client code becomes:
input.flatMap(i ->
stockWebClient
.post()
.uri("/stream")
.bodyValue(i)
.accept(MediaType.TEXT_EVENT_STREAM)
.retrieve()
.bodyToFlux(new ParameterizedTypeReference<ServerSentEvent<String>>() {})
.mapNotNull(ServerSentEvent::data)
.log())
.blockLast();
You can also do the same with a functional endpoint.
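For illustration, a minimal sketch of such a functional endpoint (the route and names are mine, not from the original answer; BodyInserters.fromServerSentEvents takes care of the SSE serialization):

@Bean
public RouterFunction<ServerResponse> sseRoute() {
    return RouterFunctions.route(RequestPredicates.POST("/stream"), request ->
            request.bodyToMono(Integer.class)
                    .flatMap(i -> ServerResponse.ok()
                            .contentType(MediaType.TEXT_EVENT_STREAM)
                            // mirrors the annotated controller above
                            .body(BodyInserters.fromServerSentEvents(
                                    Flux.range(i, i + 10)
                                            .map(n -> ServerSentEvent.builder(String.valueOf(n)).build())))));
}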
If you want to stream data from the client to the server and back at the same time, you won't be able to use SSE, but you can achieve it with WebSocket.
You will need a HandlerMapping and a WebSocketHandler:
public class TestWebSocketHandler implements WebSocketHandler {
@Override
public Mono<Void> handle(WebSocketSession session) {
Flux<WebSocketMessage> output = session.receive()
.map(WebSocketMessage::getPayloadAsText)
.map(Integer::parseInt)
.concatMap(i -> Flux.range(i, i + 10).map(String::valueOf))
.map(session::textMessage);
return session.send(output);
}
}
The configuration with the handler:
@Bean
public TestWebSocketHandler myHandler() {
    return new TestWebSocketHandler();
}

@Bean
public HandlerMapping handlerMapping(final TestWebSocketHandler myHandler) {
    Map<String, WebSocketHandler> map = new HashMap<>();
    map.put("/streamSocket", myHandler);
    int order = -1; // before annotated controllers
    return new SimpleUrlHandlerMapping(map, order);
}
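Depending on the setup, a WebSocketHandlerAdapter bean may also be needed so that the mapped handler is actually invoked; this is the standard WebFlux WebSocket wiring rather than something specific to this example:

@Bean
public WebSocketHandlerAdapter handlerAdapter() {
    return new WebSocketHandlerAdapter();
}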
On the client side:
var input2 = Flux.range(1, 10).delayElements(Duration.ofMillis(500));
WebSocketClient client = new ReactorNettyWebSocketClient();
client.execute(URI.create("http://localhost:8080/streamSocket"), session ->
        session.send(input2.map(i -> session.textMessage("" + i)))
                .then(session.receive()
                        .map(WebSocketMessage::getPayloadAsText)
                        .log()
                        .then())
).block();
I am writing a Spring 5 web app, and my requirement is to receive a URL-encoded form and send a URL-encoded response back.
This is the router function code:
@Configuration
public class AppRoute {

    @Bean
    public RouterFunction<ServerResponse> route(FormHandler formHandler) {
        return RouterFunctions.route()
                // .GET("/form", formHandler::sampleForm)
                // .POST("/form", accept(MediaType.APPLICATION_FORM_URLENCODED), formHandler::displayFormData)
                .POST("/formnew", accept(MediaType.APPLICATION_FORM_URLENCODED).and(contentType(MediaType.APPLICATION_FORM_URLENCODED)), formHandler::newForm)
                .build();
    }
}
and here's my handler code:
public Mono<ServerResponse> newForm(ServerRequest request) {
Mono<MultiValueMap<String, String>> formData = request.formData();
MultiValueMap<String, String> newFormData = new LinkedMultiValueMap<String, String>();
formData.subscribe(p -> newFormData.putAll(p));
newFormData.add("status", "success");
return ServerResponse.ok().contentType(MediaType.APPLICATION_FORM_URLENCODED)
.body(fromObject(newFormData));
}
Here's the error I get
2020-04-07 02:37:33.329 DEBUG 38688 --- [ctor-http-nio-3] org.springframework.web.HttpLogging : [07467aa5] Resolved [UnsupportedMediaTypeException: Content type 'application/x-www-form-urlencoded' not supported for bodyType=org.springframework.util.LinkedMultiValueMap] for HTTP POST /formnew
What's the issue here? I couldn't find any way to write the URL-encoded response back.
Could anyone point out what the issue is?
Try refactoring your code into a functional style:
public Mono<ServerResponse> newForm(ServerRequest request) {
Mono<DataBuffer> resultMono = request.formData()
.map(formData -> new LinkedMultiValueMap(formData))
.doOnNext(newFormData -> newFormData.add("status", "success"))
.map(linkedMultiValueMap -> createBody(linkedMultiValueMap));
return ServerResponse.ok().contentType(MediaType.APPLICATION_FORM_URLENCODED)
.body(BodyInserters.fromDataBuffers(resultMono));
}
private DataBuffer createBody(MultiValueMap multiValueMap) {
try {
DefaultDataBufferFactory factory = new DefaultDataBufferFactory();
return factory.wrap(ByteBuffer.wrap(objectMapper.writeValueAsString(multiValueMap).getBytes(StandardCharsets.UTF_8)));
} catch (JsonProcessingException e) {
throw new IllegalArgumentException("incorrect body");
}
}
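Note that objectMapper.writeValueAsString produces JSON rather than a URL-encoded body. If the response really has to be application/x-www-form-urlencoded, one option (a sketch of mine, not tested against your setup; the URLEncoder.encode(String, Charset) overload needs Java 10+) is to encode the map by hand:

private DataBuffer createBody(MultiValueMap<String, String> formData) {
    // Encode each key=value pair and join them with '&' (java.net.URLEncoder, java.util.stream)
    String encoded = formData.entrySet().stream()
            .flatMap(entry -> entry.getValue().stream()
                    .map(value -> URLEncoder.encode(entry.getKey(), StandardCharsets.UTF_8)
                            + "=" + URLEncoder.encode(value, StandardCharsets.UTF_8)))
            .collect(Collectors.joining("&"));
    return new DefaultDataBufferFactory().wrap(encoded.getBytes(StandardCharsets.UTF_8));
}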
I am trying to validate and log form data that goes through Spring Cloud Gateway. I have tried a few approaches, ran into a few problems, and could not read the body properly. I have tried:
@Component
public class GatewayRequestFilter {

    @Bean
    public GlobalFilter apply() {
        return (exchange, chain) -> {
            MediaType contentType = exchange.getRequest().getHeaders().getContentType();
            ModifyRequestBodyGatewayFilterFactory.Config modifyRequestConfig = new ModifyRequestBodyGatewayFilterFactory.Config();

            /// Method 1
            if (contentType.includes(MediaType.MULTIPART_FORM_DATA)) {
                modifyRequestConfig.setRewriteFunction(String.class, String.class, (exchange1, originalRequestBody) -> {
                    validateAndAuditLog(exchange1, originalRequestBody);
                    return Mono.just(originalRequestBody);
                });
            }

            /// Method 2
            if (contentType.includes(MediaType.MULTIPART_FORM_DATA)) {
                return exchange.getMultipartData().flatMap(originalRequestBody -> {
                    validateAndAuditLog(exchange, originalRequestBody);
                    return chain.filter(exchange);
                });
            }

            /// Method 3:
            /// https://github.com/spring-cloud/spring-cloud-gateway/issues/1307#issuecomment-553910834
            return new ModifyRequestBodyGatewayFilterFactory().apply(modifyRequestConfig).filter(exchange, chain);
        };
    }
}
For the 1st and 3rd methods, if I set inClass to String.class then I can see the data in raw multipart format. The problem is that I don't know how to parse it into a HashMap or LinkedMultiValueMap so that I can access each value by key. Here is the output I get:
----------------------------162653831591335516327921
Content-Disposition: form-data; name="simple-text"
text
----------------------------162653831591335516327921
Content-Disposition: form-data; name="simple-file"; filename="simple-file"
Content-Type: application/octet-stream
Simple file
----------------------------162653831591335516327921--
If I change inClass to Object.class then there is another error:
{
"timestamp": "2020-04-03T02:37:57.096+0000",
"path": "/tc/test/test",
"status": 500,
"error": "Internal Server Error",
"message": "Content type 'multipart/form-data;boundary=--------------------------537619313111072161580699' not supported for bodyType=java.lang.Object",
"requestId": "0592497a-1"
}
For the 2nd method I can get the data in a LinkedMultiValueMap, which is good because I can read each value by key and also get the uploaded file names, but the problem is that it hangs for 10s before passing the request downstream.
Does anyone have any idea how to read or modify form data that goes through Spring Cloud Gateway?
Rewriting the answer with an example.
The basic approach is described here, though it needs a lot of refinement to work for multipart:
https://developpaper.com/question/how-to-modify-the-request-parameters-of-multipart-form-data-format-in-spring-cloud-gateway/
For any approach to work, once you read the data you need to set a modified request object on the exchange so it can be processed again downstream. Setting the new multipart object downstream is a bit tricky because there is no straightforward way to convert string -> multipart -> string.
Here is a sample based on that approach. Note that for now it only works if the multipart contains form fields and not file fields, because in the latter case we are dealing with a stream, which can be embedded anywhere within the entire multipart request, and it is not possible to modify such a request without blocking calls, which Netty does not allow.
private final List<HttpMessageReader<?>> messageReaders = HandlerStrategies.withDefaults().messageReaders();
public GatewayFilter apply(Config config) {
return new OrderedGatewayFilter((exchange, chain) -> {
ServerRequest serverRequest = ServerRequest.create(exchange, messageReaders);
// get modified body from original body o
Mono<MultiValueMap<String, String>> modifiedBody = serverRequest.bodyToMono(String.class).flatMap(o -> {
// create mock request to read body
SynchronossPartHttpMessageReader synchronossReader = new SynchronossPartHttpMessageReader();
MultipartHttpMessageReader reader = new MultipartHttpMessageReader(synchronossReader);
MockServerHttpRequest request = MockServerHttpRequest.post("").contentType(exchange.getRequest().getHeaders().getContentType()).body(o);
Mono<MultiValueMap<String, Part>> monoRequestParts = reader.readMono(MULTIPART_DATA_TYPE, request, Collections.emptyMap());
// modify parts
return monoRequestParts.flatMap(requestParts -> {
Map<String, List<String>> modifedBodyArray = requestParts.entrySet().stream().map(entry -> {
String key = entry.getKey();
LOGGER.info(key);
List<String> entries = entry.getValue().stream().map(part -> {
LOGGER.info("{}", part);
// read the input part
String input = ((FormFieldPart) part).value();
// return the modified input part
return new String(modifyRequest(config, exchange, key, input));
}).collect(Collectors.toList());
return new Map.Entry<String, List<String>>() {
@Override
public String getKey() {
return key;
}
@Override
public List<String> getValue() {
return entries;
}
@Override
public List<String> setValue(List<String> param1) {
return param1;
}
};
}).collect(Collectors.toMap(k -> k.getKey(), k -> k.getValue()));
return Mono.just(new LinkedMultiValueMap<String, String>(modifedBodyArray));
});
});
// insert the new modified body
BodyInserter bodyInserter = BodyInserters.fromPublisher(modifiedBody, new ParameterizedTypeReference<MultiValueMap<String, String>>() {});
HttpHeaders headers = new HttpHeaders();
headers.putAll(exchange.getRequest().getHeaders());
// the new content type will be computed by bodyInserter
// and then set in the request decorator
headers.remove(HttpHeaders.CONTENT_LENGTH);
CachedBodyOutputMessage outputMessage = new CachedBodyOutputMessage(exchange, headers);
return bodyInserter.insert(outputMessage, new BodyInserterContext())
.then(Mono.defer(() -> {
ServerHttpRequest decorator = decorate(exchange, headers, outputMessage);
return chain.filter(exchange.mutate().request(decorator).build());
}));
}, RouteToRequestUrlFilter.ROUTE_TO_URL_FILTER_ORDER + 1);
}
// some of the helper methods
private String modifyRequest(Config config, ServerWebExchange exchange, String key, String input) {
// do your thing in here !!!
return input;
}
private ServerHttpRequestDecorator decorate(ServerWebExchange exchange, HttpHeaders headers, CachedBodyOutputMessage outputMessage) {
return new ServerHttpRequestDecorator(exchange.getRequest()) {
@Override
public HttpHeaders getHeaders() {
long contentLength = headers.getContentLength();
HttpHeaders httpHeaders = new HttpHeaders();
httpHeaders.putAll(headers);
if (contentLength > 0) {
httpHeaders.setContentLength(contentLength);
} else {
// TODO: this causes a 'HTTP/1.1 411 Length Required' on httpbin.org
httpHeaders.set(HttpHeaders.TRANSFER_ENCODING, "chunked");
}
return httpHeaders;
}
@Override
public Flux<DataBuffer> getBody() {
return outputMessage.getBody();
}
};
}
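For completeness, a hedged usage sketch: assuming the apply method above lives in a GatewayFilterFactory bean called, say, ModifyMultipartGatewayFilterFactory with an illustrative Config class, it could be wired into a route via the Java DSL like this:

@Bean
public RouteLocator routes(RouteLocatorBuilder builder,
                           ModifyMultipartGatewayFilterFactory modifyMultipart) {
    return builder.routes()
            .route("multipart-route", r -> r.path("/upload/**")
                    // apply(...) returns the GatewayFilter built above
                    .filters(f -> f.filter(modifyMultipart.apply(new ModifyMultipartGatewayFilterFactory.Config())))
                    .uri("http://localhost:8081"))
            .build();
}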
I have a Spring Boot microservice and I am sending a large payload using Swagger. At the server I get only 15000 chars and the rest 2000 chars are not read.
How can I use ReadBodyPredicateFactory to cache the body message text?
I am using Spring Cloud Gateway and added filters. In the filter's apply method I am trying to read the payload JSON using
DefaultServerRequest serverRequest = new DefaultServerRequest(exchange);
body = serverRequest.bodyToMono(String.class).toFuture().get();
Sometimes it hangs.
I tried with Flux and then I get only half the message:
Flux<DataBuffer> body = request.getBody();
body.subscribe(buffer -> {
    try {
        System.out.println("byte count:" + buffer.readableByteCount());
        byte[] bytes = new byte[buffer.readableByteCount()];
        buffer.read(bytes);
        DataBufferUtils.release(buffer);
        String bodyString = new String(bytes, StandardCharsets.UTF_8);
        sb.append(bodyString);
    } catch (Exception e) {
        e.printStackTrace();
    }
});
Recently I needed a similar thing in my application, and I found it can be achieved with Spring Cloud Gateway's built-in request-body caching in ServerWebExchangeUtils.
Before the filters that use the request content for business logic, I created a filter that only forces content caching:
@Component
class CachingRequestBodyFilter extends AbstractGatewayFilterFactory<CachingRequestBodyFilter.Config> {

    public CachingRequestBodyFilter() {
        super(Config.class);
    }

    @Override
    public GatewayFilter apply(final Config config) {
        return (exchange, chain) -> ServerWebExchangeUtils.cacheRequestBody(exchange,
                (serverHttpRequest) -> chain.filter(exchange.mutate().request(serverHttpRequest).build()));
    }

    public static class Config {
    }
}
In any of the subsequent filters, we can extract the content of the request body, as below:
// some ReadRequestBodyFilter filter
public GatewayFilter apply(final Config config) {
return (exchange, chain) -> {
final var cachedBody = new StringBuilder();
final var cachedBodyAttribute = exchange.getAttribute(CACHED_REQUEST_BODY_ATTR);
if (!(cachedBodyAttribute instanceof DataBuffer)) {
// caching gone wrong error handling
}
final var dataBuffer = (DataBuffer) cachedBodyAttribute;
cachedBody.append(StandardCharsets.UTF_8.decode(dataBuffer.asByteBuffer()).toString());
final var bodyAsJson = cachedBody.toString();
// some processing
return chain.filter(exchange);
};
}
Then the gateway configuration would look like this:
spring:
  cloud:
    gateway:
      routes:
        - [...]
          filters:
            - CachingRequestBodyFilter
            - ReadRequestBodyFilter
I want to use an interceptor to add an Authorization header to every request made via RestTemplate. I am doing it like this:
public FirebaseCloudMessagingRestTemplate(@Autowired RestTemplateBuilder builder, @Value("fcm.server-key") String serverKey) {
    builder.additionalInterceptors(new ClientHttpRequestInterceptor() {
        @Override
        public ClientHttpResponse intercept(HttpRequest request, byte[] body, ClientHttpRequestExecution execution) throws IOException {
            request.getHeaders().add("Authorization", "key=" + serverKey);
            System.out.println(request.getHeaders());
            return execution.execute(request, body);
        }
    });
    this.restTemplate = builder.build();
}
However, when I do this:
DownstreamHttpMessageResponse response = restTemplate.postForObject(SEND_ENDPOINT, request, DownstreamHttpMessageResponse.class);
the interceptor is not called (I've put a breakpoint in it and it did not fire). The request is made and the expected missing-auth-key response is returned. Why is my interceptor not called?
OK, I know what's happening. After checking the build() implementation I discovered that RestTemplateBuilder does not change its own state when calling additionalInterceptors; instead it returns a new builder with the given interceptors. Chaining the calls solves the issue:
public FirebaseCloudMessagingRestTemplate(final @Autowired RestTemplateBuilder builder, final @Value("${fcm.server-key}") String serverKey) {
    this.restTemplate = builder.additionalInterceptors((request, body, execution) -> {
        request.getHeaders().add("Authorization", "key=" + serverKey);
        log.debug("Adding authorization header");
        return execution.execute(request, body);
    }).build();
}
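As a side note, for a static header like this, RestTemplateBuilder.defaultHeader is a shorter alternative to an interceptor (a sketch under the same assumptions about the fcm.server-key property):

public FirebaseCloudMessagingRestTemplate(final @Autowired RestTemplateBuilder builder, final @Value("${fcm.server-key}") String serverKey) {
    // defaultHeader adds the header to every request made by the resulting RestTemplate
    this.restTemplate = builder
            .defaultHeader("Authorization", "key=" + serverKey)
            .build();
}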