I'm trying to create a controller that downloads a large file using RxNetty.
I wrote something stupid like this:
@RequestMapping(method = RequestMethod.GET, path = "largeFile")
public DeferredResult<ResponseEntity<byte[]>> largeFile() throws IOException {
    Observable<ResponseEntity<byte[]>> observable = RxNetty.createHttpGet(URL)
            .flatMap(AbstractHttpContentHolder::getContent)
            .map(data -> {
                byte[] bytes = new byte[data.readableBytes()];
                data.readBytes(bytes);
                return new ResponseEntity<>(bytes, HttpStatus.OK);
            });
    DeferredResult<ResponseEntity<byte[]>> deferredResult = new DeferredResult<>();
    observable.subscribe(deferredResult::setResult, deferredResult::setErrorResult);
    return deferredResult;
}
However, I get the following error:
Caused by: io.netty.handler.codec.TooLongFrameException: HTTP content length exceeded 1048576 bytes.
The default client in RxNetty 0.4.x aggregates the HTTP payload, and that aggregation has a limit on the maximum content length. The exception you see is caused by that limit. You can alter the default client using a PipelineConfigurator, as shown in this example:
https://github.com/ReactiveX/RxNetty/blob/0.4.x/rxnetty-examples/src/main/java/io/reactivex/netty/examples/http/chunk/HttpChunkClient.java#L49
after which the payload will be chunked into multiple buffers.
Alternatively, if you know the max size, then you can use an appropriate payload aggregator in the configurator.
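A rough sketch of the chunked approach, based on the linked HttpChunkClient example (the factory methods createHttpClient and PipelineConfigurators.httpClientConfigurator follow that example and should be verified against your RxNetty 0.4.x version):
import io.netty.buffer.ByteBuf;
import io.reactivex.netty.RxNetty;
import io.reactivex.netty.pipeline.PipelineConfigurator;
import io.reactivex.netty.pipeline.PipelineConfigurators;
import io.reactivex.netty.protocol.http.client.HttpClient;
import io.reactivex.netty.protocol.http.client.HttpClientRequest;
import io.reactivex.netty.protocol.http.client.HttpClientResponse;
import rx.Observable;

public class ChunkedDownload {

    public Observable<byte[]> downloadChunks(String host, int port, String path) {
        // Plain HTTP client configurator: no aggregator in the pipeline, so the
        // payload arrives as a stream of ByteBuf chunks instead of one buffer
        // bounded by the 1048576-byte limit.
        PipelineConfigurator<HttpClientResponse<ByteBuf>, HttpClientRequest<ByteBuf>> configurator =
                PipelineConfigurators.httpClientConfigurator();

        HttpClient<ByteBuf, ByteBuf> client = RxNetty.createHttpClient(host, port, configurator);

        return client.submit(HttpClientRequest.createGet(path))
                .flatMap(HttpClientResponse::getContent)
                .map(buf -> {
                    byte[] bytes = new byte[buf.readableBytes()];
                    buf.readBytes(bytes);
                    return bytes; // one chunk; write it out instead of collecting everything in memory
                });
    }
}
Each emitted byte[] is a single chunk, so you can stream it to the servlet response (or a file) rather than building the whole payload in memory as the DeferredResult version does.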
Related
I am developing a proxy service to a Minio server using WebClient that handles all Minio/S3 API endpoints. Most of them work fine, but I have encountered one case in which the PUT operation seems to get hung up when trying to set the body of the request to either an InputStream, a File, or a Resource pointing to it. (See epilogue at the bottom, as I'm left wondering where the problem really is.)
The only way I've found to make it work is to read the file contents to an in-memory byte array. The following baseline works, for example:
WebClient.UriSpec<WebClient.RequestBodySpec> uriSpec = client.method(request.getMethod());
WebClient.RequestBodySpec bodySpec = uriSpec.uri(uri);
WebClient.RequestHeadersSpec<?> headersSpec = bodySpec;
try {
// read file to byte array; works fine
byte[] bytes = Files.readAllBytes(Path.of(file.get().getFile().toURI()));
// set it to the request body
headersSpec = bodySpec.bodyValue(bytes);
} catch (IOException e) {
throw new UncheckedIOException(e);
}
// manipulate some headers
headersSpec = headersSpec.headers(httpHeaders -> ...);
// execute the request; works fine in this scenario
return headersSpec.exchangeToMono(resp -> ...)
.doOnError(throwable -> log.error("Trouble proxying request: " + throwable.getMessage(), throwable));
However, every alternative I try for streaming this content instead results in a request that seems to hang in the headersSpec.exchangeToMono invocation. I don't see any errors on the proxy service, and the client socket eventually gives up:
java.net.SocketTimeoutException: timeout
client-tester_1 | at okio.SocketAsyncTimeout.newTimeoutException(JvmOkio.kt:143) ~[okio-jvm-2.8.0.jar:na]
client-tester_1 | Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Some examples of failure (or, paraphrasing Edison, I've successfully found at least a dozen ways that do not work):
// Use same byte array as above; Hangs
Resource resource = new ByteArrayResource(bytes);
headersSpec = bodySpec.bodyValue(resource);
// Read an input stream from the file (this one relies on a HttpMessageWriter<InputStream> that I configured on the client); Hangs
InputStream bodyStream = new BufferedInputStream(Files.newInputStream(Path.of(file.get().getFile().toURI())));
headersSpec = bodySpec.bodyValue(bodyStream);
// Resource for the file; Hangs
Resource resource = new FileSystemResource(Path.of(file.get().getFile().toURI()));
Flux<DataBuffer> flux = DataBufferUtils.read(resource, DefaultDataBufferFactory.sharedInstance, 4096);
headersSpec = bodySpec.body(flux, DataBuffer.class);
// Different resource; Hangs
Resource resource = new UrlResource(file.get().getFile().toURI());
headersSpec = bodySpec.bodyValue(resource);
// Try BodyInserters; Hangs
Flux<DataBuffer> flux = DataBufferUtils.read(Path.of(file.get().getFile().toURI()), DefaultDataBufferFactory.sharedInstance, 4096);
headersSpec = bodySpec.body(BodyInserters.fromDataBuffers(flux));
// Yet another attempt, wrapping the stream in a resource; take a guess...
Resource resource = new InputStreamResource(new BufferedInputStream(Files.newInputStream(Path.of(file.get().getFile().toURI()))));
headersSpec = bodySpec.body(BodyInserters.fromResource(resource));
I'm using recent versions of the relevant libraries:
org.springframework.boot:spring-boot-starter-webflux -> 2.7.5
org.springframework.boot:spring-boot-starter-reactor-netty:2.7.5
org.springframework:spring-core:5.3.23
Epilogue: I'm wondering if the problem is not necessarily with Spring/WebClient/Netty -- as many of these code samples were inspired by other examples I've found -- but rather with some nuance of the Minio server?
I'm running on the IBM public cloud. I have API Connect to access the Cloud Foundry microservice. I've gone through many of the posts and tried various things, and I can't seem to get this to work. Here are my property file config settings for Spring Boot:
# The name of the application
spring.application.name=xxxxx
# web base path
management.endpoints.web.base-path=/
# Embedded tomcat config
server.tomcat.max-swallow-size=256MB
server.tomcat.max-http-post-size=256MB
# File size values
spring.servlet.multipart.max-file-size=256MB
spring.servlet.multipart.max-request-size=256MB
spring.servlet.multipart.enabled=true
# Server specific values
input.server=xxx
input.rtm.bucket=xxx
storage.server.base=xxx
# Cloudant database info
input.events.db.name=xxxx
input.ait.info.db.name=xxxx
letter.number.db.name=xxxx
letter.gen.data.db.name=xxxx
# Query index design documents
query.pad.ait.info.index.name=xxxx
query.pad.ait.info.deisgn.doc=_xxxx
query.rfa.ltr.index.name=xxxx
query.rfa.ltr.design.doc=xxxx
# The logging levels of the application
logging.level.application=DEBUG
#logging.level.root=DEBUG
#logging.level.org.springframework.web=INFO
# Testing
unit.testing=false
integration.testing=true
# Jackson json config
spring.jackson.mapper.accept-case-insensitive-properties=true
Here is the REST API function for POSTing the file:
@PostMapping(value = "/send/rtm/document/{errata}")
public @ResponseBody ResponseEntity<Object> receiveRtmDocument(@PathVariable("errata") String errata, @RequestParam("file") MultipartFile file)
I'm using Spring Boot 2.1.6 and have not updated anything in the POM file. I'm attempting to send a 5.8 MB file to the API and it gives me this error:
com.ibm.tools.cloud.exceptions.DataNotJsonException: <html>
<head><title>413 Request Entity Too Large</title></head>
<body bgcolor="white">
<center><h1>413 Request Entity Too Large</h1></center>
<hr><center>openresty</center>
</body>
</html>
at com.ibm.msc.gasm.sapt.input.AitInputManagement.sendRtmDocument(AitInputManagement.java:182)
at com.ibm.msc.gasm.sapt.test.InputServiceTester.performTest(InputServiceTester.java:142)
at com.ibm.msc.gasm.sapt.test.InputServiceTester.main(InputServiceTester.java:96)
Here is the send code I am using in Java for the multipart request. The only other headers I use that are not listed here are my authorization headers.
// Create the URL connection
HttpURLConnection conn = (HttpURLConnection) (new URL(requestUri)).openConnection();
if (content != null || multipartFile) conn.setDoOutput(true);
conn.setRequestMethod(method.toString());
// Set the headers
Enumeration<String> keys = headers.keys();
while (keys.hasMoreElements())
{
// Pull out the key
String key = keys.nextElement();
// Set the header
conn.setRequestProperty(key, headers.get(key));
}
// Set the accept header
if (acceptHeader != null) conn.setRequestProperty("Accept", acceptHeader);
// Set the content header
if (contentTypeHeader != null) conn.setRequestProperty("Content-Type", contentTypeHeader);
if (content != null)
{
// Set the content
DataOutputStream dos = new DataOutputStream(conn.getOutputStream());
if (content.isFileContent()) dos.write(content.getFileContentAsByteArray());
else if (content.isByteArrayContent()) dos.write(content.getContentAsByteArray());
else if (content.isStringContent()) dos.write(content.getStringContentAsByteArray());
// close the stream
dos.flush();
dos.close();
}
// Set the multipart file
if (multipartFile)
{
// Set the properties
conn.setUseCaches(false);
conn.setRequestProperty("Connection", "Keep-Alive");
conn.setRequestProperty("Cache-Control", "no-cache");
conn.setRequestProperty("Content-Type", "multipart/form-data;boundry=" + MP_BOUNDRY);
// Set the content
DataOutputStream dos = new DataOutputStream(conn.getOutputStream());
dos.writeBytes(MP_HYPHENS + MP_BOUNDRY + StringUtils.crlf);
dos.writeBytes("Content-Disposition: form-data: name=\"" + this.mpName + "\";filename=\"" + this.mpFileName + "\"" + StringUtils.crlf);
dos.writeBytes(StringUtils.crlf);
dos.write(IOUtils.toByteArray(new FileInputStream(this.mpFileNamePath)));
dos.writeBytes(StringUtils.crlf);
dos.writeBytes(MP_HYPHENS + MP_BOUNDRY + MP_HYPHENS + StringUtils.crlf);
// close the stream
dos.flush();
dos.close();
}
// Get the response
HttpResponseMessage response = null;
try
{
// Extract the stream
InputStream is = (conn.getResponseCode() >= HttpURLConnection.HTTP_BAD_REQUEST) ? conn.getErrorStream() : conn.getInputStream();
// Pull out the information
byte[] data = IOUtils.toByteArray(is);
// Set the response
response = new HttpResponseMessage(requestUri, HttpStatusCode.getType(conn.getResponseCode()), acceptHeader, data, conn.getResponseMessage());
}
catch (Throwable e)
{
throw new IOException(String.format("Error reading results from %s", requestUri), e);
}
// Close the request
conn.disconnect();
// Send request
return response;
I've tried several things, but I am not sure what I am missing. Anyone have any ideas how to fix this?
You need to change the NGINX settings.
Add the following line to the config file:
client_max_body_size 20M;
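For example, in nginx.conf (a sketch; the exact config file depends on your NGINX/openresty setup, and the directive can also go in a server or location block):
http {
    # allow request bodies up to 20 MB; the default of 1 MB is what triggers the 413
    client_max_body_size 20M;
}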
If you submit the file with a form and accept it with MultipartFile (the other situation is not clear), the default file size is limited to 2MB. If you want to upload a large file, you need to configure a larger file size.
https://www.cyberciti.biz/faq/linux-unix-bsd-nginx-413-request-entity-too-large/
Try these two in your application.properties
server.tomcat.max-swallow-size=XMB //maximum size of the request body/payload
server.tomcat.max-http-post-size=XMB //maximum size of entire POST request
X is the desired size in megabytes.
I have a servlet that returns an image as InputStreamResource. There are approx 50 static images that are to be returned based on some get query parameters.
For not having to look up each of those images every time it is requested (which is very often), I'd like to cache those images responses.
@RestController
public class MyRestController {

    // code is just an example; there may be any number of parameters
    @RequestMapping("/{code}")
    @Cacheable("code.cache")
    public ResponseEntity<InputStreamResource> getCodeLogo(@PathVariable("code") String code) throws IOException {
        FileSystemResource file = new FileSystemResource("d:/images/" + code + ".jpg");
        return ResponseEntity.ok()
                .contentType(MediaType.IMAGE_JPEG)
                .lastModified(file.lastModified())
                .contentLength(file.contentLength())
                .body(new InputStreamResource(file.getInputStream()));
    }
}
When using the @Cacheable annotation (no matter whether it is placed directly on the @RequestMapping method or refactored to an external service), I'm getting the following exception:
cause: java.lang.IllegalStateException: InputStream has already been read - do not use InputStreamResource if a stream needs to be read multiple times - error: InputStream has already been read - do not use InputStreamResource if a stream needs to be read multiple times
org.springframework.core.io.InputStreamResource.getInputStream(InputStreamResource.java:96)
org.springframework.http.converter.ResourceHttpMessageConverter.writeInternal(ResourceHttpMessageConverter.java:100)
org.springframework.http.converter.ResourceHttpMessageConverter.writeInternal(ResourceHttpMessageConverter.java:47)
org.springframework.http.converter.AbstractHttpMessageConverter.write(AbstractHttpMessageConverter.java:195)
org.springframework.web.servlet.mvc.method.annotation.AbstractMessageConverterMethodProcessor.writeWithMessageConverters(AbstractMessageConverterMethodProcessor.java:238)
org.springframework.web.servlet.mvc.method.annotation.HttpEntityMethodProcessor.handleReturnValue(HttpEntityMethodProcessor.java:183)
org.springframework.web.method.support.HandlerMethodReturnValueHandlerComposite.handleReturnValue(HandlerMethodReturnValueHandlerComposite.java:81)
org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:126)
org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:832)
org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:743)
Question: how can I then cache the ResponseEntity of type InputStreamResource at all?
The cache manager will add the ResponseEntity, with the InputStreamResource inside it, to the cache. The first time this is fine. But when the cached ResponseEntity tries to read the InputStreamResource a second time, you get the exception, because the stream cannot be read more than once.
Solution: don't cache the InputStreamResource itself, but cache the content of the stream.
@RestController
public class MyRestController {

    @RequestMapping("/{code}")
    @Cacheable("code.cache")
    public ResponseEntity<byte[]> getCodeLogo(@PathVariable("code") String code) throws IOException {
        FileSystemResource file = new FileSystemResource("d:/images/" + code + ".jpg");
        byte[] content = new byte[(int) file.contentLength()];
        IOUtils.read(file.getInputStream(), content);
        return ResponseEntity.ok()
                .contentType(MediaType.IMAGE_JPEG)
                .lastModified(file.lastModified())
                .contentLength(file.contentLength())
                .body(content);
    }
}
I've used IOUtils.read() from org.apache.commons.io to copy the bytes from the stream into the array, but you can do it any way you prefer.
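For instance, Spring's own org.springframework.util.StreamUtils can do the same thing without commons-io (a small alternative sketch, reusing the file variable from above):
byte[] content = StreamUtils.copyToByteArray(file.getInputStream());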
You can't cache Streams. Once they are read, they are gone.
The error message is pretty clear about that:
InputStream has already been read -
do not use InputStreamResource if a stream needs to be read multiple times
From your code and comments, it seems that you have a big images folder with JPG logos (which might be added, deleted or modified), and you want a daily cache of the ones you're being asked for, so you don't have to constantly reload them from disk.
If that's the case, your best option is to read the File's content to a ByteArray and cache/return that instead.
Environment:
Java client ("google-api-services-storage", "v1-rev33-1.20.0") using JSON API (com.google.api.services.storage.Storage class).
Goal:
Move a large object from "standard" to "nearline" bucket using Java client (file size is 512 MB).
Steps:
Use "rewrite" API method.
Problem:
I'm getting a SocketTimeoutException after 20 seconds.
Investigation:
The same code works fine when I use rewrite from "standard" bucket to another "standard" bucket for the same object.
I've also tried the APIs Explorer and created a request to rewrite an object from the "standard" to the "nearline" bucket. The server responded in about 27 seconds, and the "totalBytesRewritten" property in the response was about half of the file size. How do I get and handle such a response?
Documentation says:
"If the source and destination are different locations and/or storage classes, the rewrite method might require multiple calls."
My code (Java):
final Storage.Objects.Rewrite rewriteRequest = storage.objects().rewrite(
STANDARD_BUCKET_NAME,
SOURCE_OBJECT_PATH,
NEARLINE_BUCKET_NAME,
TARGET_OBJECT_PATH,
null // no metadata overriding
);
rewriteRequest.execute();
Please help.
According to the documentation, if a file was split up into chunks, you are supposed to call rewrite again, using the 'rewriteToken' returned in the first response from rewrite. The operation will resume, doing one more chunk of data. This should be repeated until the response has getDone() == true.
My implementation for the Java API:
private void rewriteUntilDone(final String sourceBucket, final String sourceKey,
final String destBucket, final String destKey) throws IOException {
rewriteUntilDone(sourceBucket, sourceKey, destBucket, destKey, null);
}
private void rewriteUntilDone(final String sourceBucket, final String sourceKey,
final String destBucket, final String destKey,
@Nullable final String rewriteToken)
throws IOException {
Storage.Objects.Rewrite rewrite = googleStorage.objects().rewrite(sourceBucket, sourceKey, destBucket, destKey, null);
if (rewriteToken != null) {
rewrite.setRewriteToken(rewriteToken);
}
RewriteResponse rewriteResponse = rewrite.execute();
if (!rewriteResponse.getDone()) {
String rewriteToken2 = rewriteResponse.getRewriteToken();
BigInteger totalBytesRewritten = rewriteResponse.getTotalBytesRewritten();
log.debug("Rewriting not finished, bytes completed: {}. Calling rewrite again with token {}", totalBytesRewritten, rewriteToken2);
rewriteUntilDone(sourceBucket, sourceKey, destBucket, destKey, rewriteToken2);
}
}
EDIT:
Also, you may have to increase the read timeout. It seems that rewrite responds after 27 s, but the default timeout is 20 s. Wrap your GoogleCredentials to set the read timeout.
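A sketch of that wrapping with the google-api-client types (the httpTransport, jsonFactory and credential variables are assumed to exist already; credential is whatever HttpRequestInitializer you currently pass to Storage.Builder):
// Wrap the existing initializer so every request gets a longer connect/read timeout.
private static HttpRequestInitializer withTimeouts(final HttpRequestInitializer delegate, final int timeoutMillis) {
    return request -> {
        delegate.initialize(request);
        request.setConnectTimeout(timeoutMillis);
        request.setReadTimeout(timeoutMillis);
    };
}

// When building the client:
Storage storage = new Storage.Builder(httpTransport, jsonFactory, withTimeouts(credential, 120_000))
        .setApplicationName("my-app")
        .build();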
For example, if the POST URL is:
http://www.wolf.com/pcap/search?stime={stime}&etime={etime}&bpf={bpf}
then can we do this:
Map<String, String> vars = new HashMap<String, String>();
vars.put("bpf", bpf);
...
responseString = restTemplate.postForObject(url, null, String.class,vars);
If bpf is a String, is there a limitation on the size of bpf? Can it be any size?
Unfortunately the answer is: "It depends".
More precisely: since you append the bpf as a parameter to the URL, it does not really matter whether you are doing a POST or a GET. There are often restrictions on the length of a URL a server will handle, but that depends on what the server accepts and cannot be determined from the RestTemplate, which is the client.
For example, if the server you send the REST request to is a Tomcat, then the maximum size of the complete request header (URL, HTTP headers, etc.) is 8 kB by default for Tomcat 6.0 or higher; see e.g. https://serverfault.com/questions/56691/whats-the-maximum-url-length-in-tomcat
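If you control that Tomcat, the limit can be raised; for an embedded Tomcat in Spring Boot a sketch would be (the property name and DataSize syntax depend on your Boot 2.x version):
# application.properties: allow request headers (including the URL) up to 64 kB
server.max-http-header-size=64KB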
If you also have control over the server side, you can change the expected interface by sending the bpf not as a URL parameter but as the request body, like:
Map<String, String> vars = new HashMap<String, String>();
// vars.put("bpf", bpf); <--- not needed
responseString = restTemplate.postForObject(url, bpf, String.class, vars);
(and then of course get the bpf on the server from the request body instead).
Otherwise you are out of luck and have to limit the length of the URL. Maybe use a proxy or network sniffer to see what extra headers are actually sent, and subtract that from the 8 kB limit to get the maximum length of the URL.
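On the server side, the counterpart could look roughly like this (a sketch; the controller class, mapping path and parameter names are assumptions, with the annotations coming from org.springframework.web.bind.annotation):
// Hypothetical server-side endpoint: bpf arrives in the request body,
// so its size is no longer constrained by the URL/header limit.
@RestController
public class PcapSearchController {

    @PostMapping("/pcap/search")
    public String search(@RequestParam("stime") String stime,
                         @RequestParam("etime") String etime,
                         @RequestBody String bpf) {
        // run the search using stime, etime and the (possibly large) bpf expression
        return "ok";
    }
}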