I'm writing an Excel workbook created with Apache POI directly to the response object, without creating a file first:
val outputStream: ByteArrayOutputStream = new ByteArrayOutputStream()
workbook.write(outputStream)
ExcelOk(response.getOutputStream.write(outputStream.toByteArray))
But once the size of the response exceeds 8 kB, it starts getting downloaded as a zip file in Chrome and as an octet-stream in Firefox.
My ExcelOk object looks like this:
object ExcelOk {
  def apply(body: Any = Unit, headers: Map[String, String] = ExcelContentType, reason: String = "") = {
    halt(ActionResult(responseStatus(200, reason), body, headers))
  }
}
and my ExcelContentType (i.e., the response headers) is as follows:
val ExcelContentType = Map(
  "Access-Control-Allow-Credentials" -> "true",
  "Access-Control-Allow-Methods" -> "GET, PUT, POST, DELETE, OPTIONS",
  "Access-Control-Allow-Origin" -> "*",
  "Access-Control-Max-Age" -> "1728000",
  "Content-Type" -> "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
  "Content-Disposition" -> "attachment; filename=excel_report.xlsx"
)
I even tried adding "Transfer-Encoding" -> "chunked" to the header list, but it doesn't help.
I added this snippet to my web.xml file as well, but it didn't help either:
<mime-mapping>
    <extension>xlsx</extension>
    <mime-type>application/vnd.openxmlformats-officedocument.spreadsheetml.sheet</mime-type>
</mime-mapping>
Any help regarding this would be appreciated. Note that this behavior is observed only after the response size exceeds a certain threshold.
You have to set the response headers before writing content to the response output stream. The servlet container buffers the response (commonly 8 kB); once the buffer fills and the response is committed, any headers set afterwards are ignored, which is why the problem only appears above that threshold.
response.setHeader("Content-Type", "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet")
response.setHeader("Content-disposition", "attachment; filename=excel_report.xlsx")
workbook.write(response.getOutputStream)
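For completeness, a minimal sketch of how the whole route could look under that constraint, assuming a Scalatra-style get route (your question uses Scalatra's halt/ActionResult) and an already-populated workbook; the route path is illustrative:
get("/excel_report") {
  // Headers must be set before the response is committed, i.e. before
  // the first byte of the workbook reaches the output stream.
  response.setHeader("Content-Type", "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet")
  response.setHeader("Content-Disposition", "attachment; filename=excel_report.xlsx")
  workbook.write(response.getOutputStream)
}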
I have a controller proxy API endpoint that receives different request payloads intended for different services. The controller validates the payload and adds a few headers based on certain rules. In this context I do not want to parse the responses received from the upstream services; the proxy method should simply stream each response to the downstream client, so that it scales well without running into memory issues on large response payloads.
I have implemented the method like this:
suspend fun proxyRequest(
    url: String,
    request: ServerHttpRequest,
    customHeaders: HttpHeaders = HttpHeaders.EMPTY,
): ResponseEntity<String>? {
    val modifiedReqHeaders = getHeadersWithoutOrigin(request, customHeaders)
    val uri = URI.create(url)
    val webClient = proxyClient.method(request.method!!)
        .uri(uri)
        .body(request.body)
    modifiedReqHeaders.forEach {
        val list = it.value.iterator().asSequence().toList()
        val ar: Array<String> = list.toTypedArray()
        @Suppress("SpreadOperator")
        webClient.header(it.key, *ar)
    }
    return webClient.exchangeToMono { res ->
        res.bodyToMono(String::class.java).map { b -> ResponseEntity.status(res.statusCode()).body(b) }
    }.awaitFirstOrNull()
}
But this doesn't seem to be streaming. When I try to download a large file, it fails with an error about being unable to hold a large data buffer.
Can someone help me write a properly streaming, reactive approach?
This is what I ended up with:
suspend fun proxyRequest(
    url: String,
    request: ServerHttpRequest,
    response: ServerHttpResponse,
    customHeaders: HttpHeaders = HttpHeaders.EMPTY,
): Void? {
    val modifiedReqHeaders = getHeadersWithoutOrigin(request, customHeaders)
    val uri = URI.create(url)
    val webClient = proxyClient.method(request.method!!)
        .uri(uri)
        .body(request.body)
    modifiedReqHeaders.forEach {
        val list = it.value.iterator().asSequence().toList()
        val ar: Array<String> = list.toTypedArray()
        @Suppress("SpreadOperator")
        webClient.header(it.key, *ar)
    }
    val respEntity = webClient
        .retrieve()
        .toEntityFlux<DataBuffer>()
        .awaitSingle()
    response.apply {
        headers.putAll(respEntity.headers)
        statusCode = respEntity.statusCode
    }
    return response.writeWith(respEntity.body ?: Flux.empty()).awaitFirstOrNull()
}
Is this truly streaming data downstream and flushing as it goes?
Your first code snippet fails with memory issues because it buffers the whole response body in memory as a String and only forwards it afterwards. If the response is large, you can fill the entire available memory.
The second approach also fails because instead of returning the entire Flux<DataBuffer> (i.e. the entire response as a stream of buffers), you're only returning the first one, so the forwarded response is incomplete.
Even if you manage to fix this particular issue, there are many other things to pay attention to:
it seems you're not returning the original response headers, effectively changing the response content type
you should not forward all the incoming response headers, as some of them are really up to the server (like Transfer-Encoding)
what happens with security-related request/response headers?
how are you handling tracing and metrics?
You could take a look at the Spring Cloud Gateway project, which handles a lot of those subtleties and lets you manipulate requests and responses.
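For illustration, here is a minimal sketch of a fully streaming variant under those caveats; it assumes the same proxyClient WebClient and ServerHttpRequest/ServerHttpResponse pair from your code, forwards only the upstream Content-Type, and never materializes the body (header filtering, security, and error handling are left out):
import java.net.URI
import org.springframework.core.io.buffer.DataBuffer
import org.springframework.http.server.reactive.ServerHttpRequest
import org.springframework.http.server.reactive.ServerHttpResponse
import reactor.core.publisher.Mono

fun proxyRequest(
    url: String,
    request: ServerHttpRequest,
    response: ServerHttpResponse,
): Mono<Void> =
    proxyClient.method(request.method!!)
        .uri(URI.create(url))
        // forward the request body as raw buffers, without materializing it
        .body(request.body, DataBuffer::class.java)
        .exchangeToMono { upstream ->
            response.statusCode = upstream.statusCode()
            // copy only the headers you explicitly want to forward
            upstream.headers().contentType().ifPresent { response.headers.contentType = it }
            // writeWith subscribes to the upstream buffers one at a time,
            // so only a single DataBuffer is held in memory at any moment
            response.writeWith(upstream.bodyToFlux(DataBuffer::class.java))
        }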
I'm still relatively new to Python and this is my first time using aiohttp, so I'm hoping someone can help spot where my problem is.
I have a function that does the following:
retrieves two base64 strings from the JSON payload - base64Front and base64Back
decodes them and saves them to the "images" folder
sends Front.jpg and Back.jpg to an external API
the external API expects multipart/form-data
imgDataF = base64.b64decode(base64FrontStr)
frontFilename = 'images/Front.jpg'
with open(frontFilename, 'wb') as frontImgFile:
    frontImgFile.write(imgDataF)

imgDataB = base64.b64decode(base64BackStr)
backFilename = 'images/Back.jpg'
with open(backFilename, 'wb') as backImgFile:
    backImgFile.write(imgDataB)

headers = {
    'Content-Type': 'multipart/form-data',
    'AccountAccessKey': 'some-access-key',
    'SecretToken': 'some-secret-token'
}

url = 'https://external-api/2.0/AuthenticateDoc'
files = [('file', open('./images/Front.jpg', 'rb')),
         ('file', open('./images/Back.jpg', 'rb'))]

async with aiohttp.ClientSession() as session:
    async with session.post(url, data=files, headers=headers) as resp:
        print(resp.status)
        print(await resp.json())
The response I'm getting is status code 400 with:
{'ErrorCode': 1040, 'ErrorMessage': 'Malformed/Invalid Request detected'}
If I call the URL via Postman and send the two jpg files, I get status code 200.
Hope someone can help here.
Thanks in advance.
Try using aiohttp's FormData to construct your request. Remove the Content-Type header from your headers dict and set a content type on the FormData field instead, as below. A multipart/form-data header that you set by hand is missing the boundary parameter, which is why the server rejects the request as malformed; aiohttp generates the header, boundary included, when you let it.
data = FormData()
data.add_field('file',
               open('Front.jpg', 'rb'),
               filename='Front.jpg',
               content_type='multipart/form-data')

await session.post(url, data=data)
Reference: https://docs.aiohttp.org/en/stable/client_quickstart.html#post-a-multipart-encoded-file
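Applied to your case, a sketch that sends both images and keeps your access-key headers, but leaves Content-Type out entirely so aiohttp can generate the multipart boundary itself (the URL, key values, and the image/jpeg field content type are assumptions based on your question):
import asyncio
import aiohttp
from aiohttp import FormData

async def authenticate_doc():
    url = 'https://external-api/2.0/AuthenticateDoc'
    headers = {
        # No Content-Type here: aiohttp adds it, boundary included.
        'AccountAccessKey': 'some-access-key',
        'SecretToken': 'some-secret-token',
    }
    data = FormData()
    for name in ('Front.jpg', 'Back.jpg'):
        data.add_field('file',
                       open(f'images/{name}', 'rb'),
                       filename=name,
                       content_type='image/jpeg')
    async with aiohttp.ClientSession() as session:
        async with session.post(url, data=data, headers=headers) as resp:
            print(resp.status)
            print(await resp.json())

asyncio.run(authenticate_doc())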
A controller action generates CSV content and returns it with the header Content-Disposition: attachment; filename=file.csv:
#GetMapping("/csv")
public void csvEmissions(HttpServletResponse response) {
try {
ColumnPositionMappingStrategy<CsvRow> mapStrategy
= new ColumnPositionMappingStrategy<>();
mapStrategy.setType(CsvRow.class);
String[] columns = new String[]{
"col1",
"col2"
};
mapStrategy.setColumnMapping(columns);
CSVWriter csvWriter = new CSVWriter(response.getWriter());
StatefulBeanToCsv<CsvRow> btcsv = new StatefulBeanToCsvBuilder<CsvRow>(response.getWriter())
.withMappingStrategy(mapStrategy)
.withSeparator(',')
.build();
btcsv.write(csvrows());
response.setContentType("text/csv");
response.setHeader("Content-Disposition", "attachment; filename=file.csv");
csvWriter.close();
} catch (IOException | CsvRequiredFieldEmptyException | CsvDataTypeMismatchException e) {
throw new RuntimeException(e.getMessage());
}
}
All works fine when csvrows() does not return many rows; the file is properly downloaded by the browser.
When the row count is larger (say, over 200), the Content-Type and Content-Disposition headers are dropped and the browser prints the CSV output as plain text.
Only the Transfer-Encoding: chunked header is present in the response.
Any suggestions on how to make it downloadable for a large amount of data?
The Content-Length header is missing from your response; that's why large content is sent in chunks (Transfer-Encoding: chunked) and the browser displays it in the tab.
To get the content length, try saving the CSV data to a temporary file before writing it to the response.
Also set the headers before writing data to the response writer: once the response buffer fills and the first chunk is flushed, the response is committed and later header changes are ignored, which is exactly what you're seeing with larger row counts.
File file = createTempCSVFile();

response.setContentType("text/csv");
response.setContentLength((int) file.length());
response.setHeader("Content-Disposition", "attachment; filename=file.csv");

// write file data to response.getWriter()
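Putting it together, a sketch of the reworked action; createTempCSVFile() is a hypothetical helper that runs your existing StatefulBeanToCsv code against a temp-file writer instead of response.getWriter():
@GetMapping("/csv")
public void csvEmissions(HttpServletResponse response) {
    try {
        // 1. Write the CSV to a temporary file first so its size is known.
        File file = createTempCSVFile();  // hypothetical helper

        // 2. Set all headers before any body bytes are written; once the
        //    response is committed, header changes are silently ignored.
        response.setContentType("text/csv");
        response.setContentLength((int) file.length());
        response.setHeader("Content-Disposition", "attachment; filename=file.csv");

        // 3. Stream the file to the client.
        Files.copy(file.toPath(), response.getOutputStream());
        response.getOutputStream().flush();
    } catch (IOException e) {
        throw new RuntimeException(e.getMessage(), e);
    }
}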
Hope this helps!
I am new to both Spring Boot and REST calls.
I am trying to consume a REST service, and I have no information about the API except its URL. When I hit the URL from a browser I get a response like {key:value}, so I assumed it is a JSON response.
I am consuming it in Spring Boot as follows:
restTemplate.getForObject(url, String.class);
This gives: Invalid mime type "content-type: text/plain; charset=ISO-8859-1": Invalid token character ':' in token "content-type: text"
I assume this error occurs because the response content type is set to text/plain even though the body is in JSON format.
EDIT:
I tried it this way as well, but it did not work:
HttpHeaders headers = new HttpHeaders();
headers.setAccept(Arrays.asList(MediaType.APPLICATION_JSON));
HttpEntity<String> entity = new HttpEntity<String>("parameters", headers);
ResponseEntity<String> result = restTemplate.exchange(url, HttpMethod.GET, entity, String.class);
How can I handle and solve this?
You might want to read about the request headers your REST API needs. The Content-Type header specifies the media type of the request you're sending to the server. Since you're only getting data from the server, you should instead set the Accept header to the kind of response you want, i.e. Accept: application/json.
Unfortunately, you can't set headers using getForObject(). You could try this:
URL url = new URL("Enter the URL of the REST endpoint");
HttpURLConnection con = (HttpURLConnection) url.openConnection();
con.setRequestMethod("GET");
con.setRequestProperty("Accept", "application/json");

if (con.getResponseCode() == HttpURLConnection.HTTP_OK) {
    BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));
    StringBuffer content = new StringBuffer();
    String inputLine;
    while ((inputLine = in.readLine()) != null) {
        content.append(inputLine);
    }
    in.close();
}
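If you then need the body as JSON rather than raw text, a small follow-up sketch using Jackson's ObjectMapper (jackson-databind already ships with the Spring Boot web starter):
// Parse the accumulated response text into a generic map
ObjectMapper mapper = new ObjectMapper();
Map<String, Object> json = mapper.readValue(content.toString(),
        new TypeReference<Map<String, Object>>() {});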
If I try:
url = "https://www.economist.com/news/finance-and-economics/21727073-economists-struggle-work-out-how-much-free-economy-comes-cost"
{:ok, %HTTPoison.Response{status_code: 200, body: body}} = HTTPoison.get(url)
IO.binwrite body
I see garbled text (instead of HTML) in the console. But if I view the page source in the browser, I see HTML there. What am I doing wrong?
PS: It works fine with a JS HTTP client (axios); I'm not sure why it doesn't work with HTTPoison.
That URL returns the body in gzipped form and indicates this by sending the header Content-Encoding: gzip. hackney, the library HTTPoison is built on, does not decode this automatically; the feature will likely be added at some point. Until then, you can decode the body yourself using the :zlib module when the Content-Encoding is gzip:
url = "https://www.economist.com/news/finance-and-economics/21727073-economists-struggle-work-out-how-much-free-economy-comes-cost"
{:ok, %HTTPoison.Response{status_code: 200, headers: headers, body: body}} = HTTPoison.get(url)
gzip? = Enum.any?(headers, fn {name, value} ->
# Headers are case-insensitive so we compare their lower case form.
:hackney_bstr.to_lower(name) == "content-encoding" &&
:hackney_bstr.to_lower(value) == "gzip"
end)
body = if gzip?, do: :zlib.gunzip(body), else: body
IO.write body