Allow other files to be uploaded after FileSizeLimitExceededException for just one file - spring-boot

I have set up an application server using Spring Boot which allows uploading multiple files.
I have configured application.properties as follows:
spring.servlet.multipart.max-file-size=10MB
spring.servlet.multipart.max-request-size=20MB
When running a request using Postman, I am uploading 3 files under the key "files":
File-1 is 11MB
File-2 is 1MB
File-3 is 1MB
The request fails with the following error:
"message": "Maximum upload size exceeded; nested exception is java.lang.IllegalStateException: org.apache.tomcat.util.http.fileupload.FileUploadBase$FileSizeLimitExceededException: The field files exceeds its maximum permitted size of 10485760 bytes."
I understand this comes from the size limits configured for the server.
But is there a way to handle the exception so that only the oversized file is excluded from the upload, and the others are allowed?
I tried an exception handler to fetch the request,
but the handler is applied to the overall request, causing all the files to be skipped.
@PostMapping("/file-upload-service/file-uploader")
public List<UploadFileResponse> uploadMultipleFiles(@RequestParam String destination,
        @RequestParam("files") MultipartFile[] files) throws FileStorageException {
    logger.info("Uploading multiple files...");
    Date uploadDate = new Date();
    logger.info("Upload operation started at {}", uploadDate);
    return Arrays.stream(files)
            .map(file -> uploadFile(destination, file))
            .collect(Collectors.toList());
}
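One possible workaround, not from the thread and only a sketch: raise spring.servlet.multipart.max-file-size to the 20MB request limit so the container no longer rejects individual parts, and enforce the 10MB per-file cap in application code, skipping oversized files instead of failing the whole request. The MAX_FILE_BYTES constant and the withinLimit helper below are hypothetical names:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class UploadFilter {

    // Hypothetical per-file cap, enforced in code instead of by Tomcat.
    static final long MAX_FILE_BYTES = 10L * 1024 * 1024; // 10MB

    // In the real controller this would filter the MultipartFile[] by
    // file.getSize(); plain longs stand in for the part sizes here.
    public static List<Long> withinLimit(long[] partSizes) {
        return Arrays.stream(partSizes)
                .boxed()
                .filter(size -> size <= MAX_FILE_BYTES)
                .collect(Collectors.toList());
    }
}
```

With the 11MB/1MB/1MB request above, the 11MB part would be skipped and the two 1MB parts uploaded.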

Related

Spring-boot Api endpoint for uploading file not working after adding 'spring-boot-starter-hateoas' dependency

I have a simple API function to upload a file similar to:
@PostMapping(value = "/documents",
        consumes = {MediaType.MULTIPART_FORM_DATA_VALUE})
public Mono<ResponseEntity<String>> uploadDocument(@RequestPart Mono<FilePart> file) {
    return storeDocumentService
            .upload(file)
            .map(fileLocation -> ResponseEntity.ok(fileLocation));
}
The code works ok and uploads the file. The problem comes when I want to make the response a bit better by returning the link to the uploaded file. For this I want to use HATEOAS 'org.springframework.boot:spring-boot-starter-hateoas'. As soon as I add the dependency 'org.springframework.boot:spring-boot-starter-hateoas' to my 'build.gradle' the endpoint stops working and I get a response:
{
"timestamp": "2023-02-20T04:28:10.620+00:00",
"status": 415,
"error": "Unsupported Media Type",
"path": "/documents"
}
and also I get in the logs:
2023-02-20T05:28:10.618+01:00 WARN 2993 --- [nio-8080-exec-4] .w.s.m.s.DefaultHandlerExceptionResolver : Resolved [org.springframework.web.HttpMediaTypeNotSupportedException: Content-Type 'application/pdf' is not supported]
It is important to point out that I upload a ".pdf" file with the header "Content-Type: multipart/form-data". Most importantly, the only change between the working code and the non-working code is that I added the HATEOAS dependency 'org.springframework.boot:spring-boot-starter-hateoas'.
For uploading files you can use the type MultipartFile. It handles all types of files, and you can easily retrieve the InputStream (data) from it.
The following code may help you:
@PostMapping("uploadExcelData")
public ResponseEntity<?> uploadExcelData(@RequestParam MultipartFile file) throws IOException {
    List<...> dataList = fileHandling.convertFileAsJson(file);
    if (!dataList.isEmpty()) {
        return ....
    } else {
        return ResponseEntity.ok("No Records found !!");
    }
}
I hope the above code helps you handle the file in the endpoint.

Is it possible to save a group of files with MinIO client in one transaction?

I have a Spring Boot application which stores files on a MinIO server. My application receives groups of files and should save all files in a group, or save nothing if there is a problem with the group. I use io.minio.MinioClient#putObject for each file in a group. My code currently looks like this:
fun saveFile(folderName: String, fileName: String, file: ByteArray) {
    file.inputStream().use {
        minioClient.putObject(folderName, fileName, it, PutObjectOptions(file.size.toLong(), -1))
    }
}

fun saveFiles(folderName: String, files: Map<String, ByteArray>) {
    try {
        files.forEach { (fileName, file) -> saveFile(folderName, fileName, file) }
    } catch (e: Exception) {
        files.forEach { (fileName, _) -> minioClient.removeObject(folderName, fileName) }
        throw e
    }
}
I wonder how I could refactor my saveFiles method to make it more transactional.
N.B. There are no rules about reading files by groups - each file could be read individually.
You can try this S3 feature, which MinIO also supports:
create a .tar or .zip archive and send it to S3 with the metadata option snowball-auto-extract=true (header: X-Amz-Meta-Snowball-Auto-Extract); the archive will be automatically extracted on the server.
This is not a transaction, but it looks very similar to me.
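A minimal sketch of that approach in Java. The archive-building part uses only the JDK; the MinIO upload call is shown only as a comment (minio 8.x builder API assumed, bucket and object names hypothetical):

```java
import java.io.ByteArrayOutputStream;
import java.util.Map;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class GroupArchive {

    // Pack a whole group of files into one in-memory .zip so the group
    // can be sent with a single upload call.
    public static byte[] zipGroup(Map<String, byte[]> files) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ZipOutputStream zos = new ZipOutputStream(bos)) {
            for (Map.Entry<String, byte[]> e : files.entrySet()) {
                zos.putNextEntry(new ZipEntry(e.getKey()));
                zos.write(e.getValue());
                zos.closeEntry();
            }
        }
        return bos.toByteArray();
    }

    // The archive would then be uploaded once with the auto-extract flag,
    // e.g. (sketch only, not compiled here):
    //
    // minioClient.putObject(PutObjectArgs.builder()
    //         .bucket(folderName).object("group.zip")
    //         .stream(new ByteArrayInputStream(archive), archive.length, -1)
    //         .userMetadata(Map.of("snowball-auto-extract", "true"))
    //         .build());
}
```

Either the whole archive extracts on the server or the single upload fails, which approximates the all-or-nothing behavior asked for.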

Issues in JSON to XML and Upload to FTP in Ballerina Integrator

I am trying the samples given in the Ballerina Integrator tutorials. While running the "JSON to XML and upload to FTP" sample, I am facing this issue:
error org.wso2.ei.b7a.ftp.core.util.BallerinaFTPException
I know the reason for this issue, but I don't know where to put the required setting. Please help me sort out the issue.
The reason: the FTP credentials are specified in a conf file. I put the conf file under the root directory, but it is not picked up. I need to give
b7a.config.file=src/upload_to_ftp/resources/ballerina.conf
but I don't know where to give this.
Thanks in advance.
You can add -b7a.config.file when running the generated jar file.
Official documentation :
https://ei.docs.wso2.com/en/latest/ballerina-integrator/develop/running-on-jvm/
However, keeping the ballerina.conf file in the root directory should work; Ballerina looks for the conf file automatically when running. Make sure the conf file is outside the src directory.
For the error that you have mentioned, could you add logs to see whether the JSON has been converted to XML properly? Since the code is structured in a way that checks whether the conversion occurred, it should print an XML value:
if (employee is xml) {
    var ftpResult = ftp->put(remoteLocation, employee);
    if (ftpResult is error) {
        log:printError("Error", ftpResult);
        response.setJsonPayload({Message: "Error occurred uploading file to FTP.", Reason: ftpResult.reason()});
    } else {
        response.setJsonPayload({Message: "Employee records uploaded successfully."});
    }
} else {
    response.setJsonPayload({Message: "Error occurred transforming json to xml.", Reason: employee.reason()});
}
The if (employee is xml) part checks whether the conversion was successful.
The same applies after the file is sent to the server: if the file hasn't been sent, ftpResult will be an error. Basically, if you got the message { Message: "Employee records uploaded successfully." }, then all the checks passed.
I passed the credentials directly to ftpConfig and then it worked fine: the conversion happened and the converted file was uploaded to the FTP location successfully.
ftp:ClientEndpointConfig ftpConfig = {
    protocol: ftp:SFTP,
    host: "corpsftp.dfaDFDA.com",
    port: 22,
    secureSocket: {
        basicAuth: {
            username: "DDFDS",
            password: "FADFHYFGJ"
        }
    }
};
Output
{
"Message": "Employee records uploaded successfully."
}

Spring Cloud Gateway not returning correct Response code given by Downstream service (for file upload)

I have a simple downstream service for file upload. Sample code
@RestController
@RequestMapping("/file")
public class FileController {

    @PostMapping("/upload")
    public ResponseEntity<?> uploadFile(@RequestParam("file") MultipartFile file,
            @RequestParam(value = "delay", required = false, defaultValue = "0") int delay) throws Exception {
        System.out.println(String.join(System.getProperty("line.separator"),
                "File Name => " + file.getOriginalFilename(),
                "File Size => " + file.getSize() + "bytes",
                "File Content Type => " + file.getContentType()));
        TimeUnit.MILLISECONDS.sleep(delay);
        return ResponseEntity.ok(file.getName() + " uploaded");
    }
}
and a CustomExceptionHandler that returns BAD_REQUEST if there is a MultipartException:
@Configuration
@ControllerAdvice
public class CustomExceptionHandler {

    @ExceptionHandler(MultipartException.class)
    public ResponseEntity<String> handleMultipartException(MultipartException ex) {
        return ResponseEntity.status(HttpStatus.BAD_REQUEST).body(ex.getMessage());
    }
}
The size limit is 10MB in application.yml:
spring:
  servlet:
    multipart:
      max-file-size: 10MB
      max-request-size: 10MB
If I upload a large file, it gives me a 400 status as expected.
When I try to hit the same endpoint via Spring Cloud Gateway, I get a different result, and the logs show the following:
2019-11-08 00:36:10.797 ERROR 21904 --- [ctor-http-nio-2] a.w.r.e.AbstractErrorWebExceptionHandler : [86e57f7e] 500 Server Error for HTTP POST "/product-service/file/upload"
reactor.netty.http.client.PrematureCloseException: Connection has been closed BEFORE response, while sending request body
Note that the gateway is configured to take in large file size with RequestSize filter set globally to take way more than 10MB.
How can I get the same response code as given by the downstream service?
Also, I checked with traditional Zuul, and I get a 500 error too.
For the gateway, in this particular case, I know we can use the RequestSize filter so that the gateway returns the error code, but then we have to identify beforehand all the routes that expect this.
Also, other validations in the API, like authorization, will have the same issue: the response code produced by these validations will not propagate up.
Sample code spring-cloud-gateway/product-service/eureka - https://github.com/dhananjay12/spring-cloud/tree/master/spring-routing
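For reference, the global RequestSize filter mentioned above can be declared as a default filter in the gateway configuration. This is only a sketch; the maxSize value is an assumption:

```yaml
spring:
  cloud:
    gateway:
      default-filters:
        - name: RequestSize
          args:
            maxSize: 25000000   # ~25MB, well above the downstream 10MB limit
```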
Can you try uploading the file directly to the service, with no limitation on the file size and without going through the gateway? Try the value -1 for these properties in the
properties file of the microservice where you want to upload the file:
spring.servlet.multipart.max-file-size =-1
spring.servlet.multipart.max-request-size =-1
If that works, the problem may be with the Zuul proxy's ribbon socket size. There are properties for this type of situation, in the
properties file of the gateway:
ribbon.eager-load.enabled=true
hystrix.command.default.execution.timeout.enabled=false
hystrix.command.default.execution.isolation.strategy=THREAD
hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds=3999996
ribbon.ConnectTimeout=999999
ribbon.ReadTimeout=999999
ribbon.SocketTimeout=999999
zuul.host.socket-timeout-millis=999999
zuul.host.connect-timeout-millis=999999
zuul.sensitiveHeaders=Cookie,Set-Cookie

What is the correct way to deal with each chunk of data in a chunked response using reactor-netty?

I am working with an API server that implements a "server-push" feature by using an infinite chunked response. Each chunk in the response represents a message the server pushed to the client; each chunk is actually a complete JSON object. Here is the code I am using as a client receiving the pushed messages:
Flux<JSONObject> jsonObjectFlux = client
        .post(uriBuilder.expand("/data/long_poll").toString(), request -> {
            String pollingRequest = createPollingRequest();
            return request
                    .failOnClientError(false)
                    .failOnServerError(false)
                    .addHeader("Authorization", host.getToken())
                    .addHeader(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE)
                    .addHeader(HttpHeaders.CONTENT_LENGTH,
                            String.valueOf(ByteBufUtil.utf8Bytes(pollingRequest)))
                    .sendString(Mono.just(pollingRequest));
        }).flatMapMany(response -> response.receiveContent().map(httpContent -> {
            ByteBuf byteBuf = httpContent.content();
            String source = new String(ByteBufUtil.getBytes(byteBuf), Charsets.UTF_8);
            return new JSONObject(source);
        }));

jsonObjectFlux.subscribe(jsonObject -> {
    logger.debug("JSON: {}", jsonObject);
});
However, I get an exception like:
reactor.core.Exceptions$ErrorCallbackNotImplemented: org.json.JSONException: Unterminated string at 846 [character 847 line 1]
Caused by: org.json.JSONException: Unterminated string at 846 [character 847 line 1]
at org.json.JSONTokener.syntaxError(JSONTokener.java:433)
at org.json.JSONTokener.nextString(JSONTokener.java:260)
at org.json.JSONTokener.nextValue(JSONTokener.java:360)
at org.json.JSONObject.<init>(JSONObject.java:214)
at org.json.JSONTokener.nextValue(JSONTokener.java:363)
at org.json.JSONObject.<init>(JSONObject.java:214)
Obviously, I am not getting a whole JSON object per chunk. I am wondering whether response.receiveContent() is the right way to deal with each chunk of data.
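One way to address this, sketched independently of the reactor-netty API: HTTP chunk boundaries do not have to line up with JSON object boundaries, so the fragments must be reassembled before parsing. A small stateful assembler can track brace depth across chunks (ignoring braces inside strings) and emit only complete top-level objects; the class and method names below are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

public class JsonChunkAssembler {

    private final StringBuilder buffer = new StringBuilder();
    private int depth = 0;
    private boolean inString = false;
    private boolean escaped = false;

    // Feed one network chunk; returns the complete top-level JSON
    // objects finished by this chunk (possibly none). State is kept
    // across calls, so partial objects survive chunk boundaries.
    public List<String> feed(String chunk) {
        List<String> complete = new ArrayList<>();
        for (int i = 0; i < chunk.length(); i++) {
            char c = chunk.charAt(i);
            buffer.append(c);
            if (escaped) { escaped = false; continue; }
            if (inString) {
                if (c == '\\') { escaped = true; }
                else if (c == '"') { inString = false; }
                continue;
            }
            if (c == '"') { inString = true; }
            else if (c == '{') { depth++; }
            else if (c == '}' && --depth == 0) {
                complete.add(buffer.toString().trim());
                buffer.setLength(0);
            }
        }
        return complete;
    }
}
```

In the pipeline above, the idea would be to map each chunk's bytes to a String and feed them through a single assembler instance (for example via concatMapIterable) instead of calling new JSONObject(source) on every chunk.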
