I have a .NET Core 2.1 API application that will download a file from a remote location based on the file name. Here is the code:
static public class FileDownloadAsync
{
    static public async Task DownloadFile(string filename)
    {
        //File name is 1GB.zip for testing
        Stopwatch stopwatch = new Stopwatch();
        stopwatch.Start();
        using (HttpClient client = new HttpClient())
        {
            string url = @"http://speedtest.tele2.net/" + filename;
            using (HttpResponseMessage response = await client.GetAsync(url, HttpCompletionOption.ResponseHeadersRead))
            using (Stream readFrom = await response.Content.ReadAsStreamAsync())
            {
                string tempFile = $"D:\\Test\\{filename}";
                using (Stream writeTo = File.Open(tempFile, FileMode.Create))
                {
                    await readFrom.CopyToAsync(writeTo);
                }
            }
            stopwatch.Stop();
            Debug.Print(stopwatch.Elapsed.ToString());
        }
    }
}
This is working great; it will pull a 1 GB file down in about 50 seconds, well within the required download time. I have hard-coded the test file name and the storage location for testing; these values will ultimately come from a config file when moved into production. Here is the API endpoint that calls this function:
[HttpGet("{fileName}")]
public async Task<string> GetFile(string fileName)
{
    await FileDownloadAsync.DownloadFile(fileName);
    return "Done";
}
So getting the file from a remote location down to the local server is not a problem. I need some help/guidance on re-posting this file to another API. Once the file is downloaded, some work is done on it to prepare it for upload (the files are all MP4 files), and once that work is done, I need to post it to another API for more proprietary processing. Here is the API endpoint data I have:
POST: /batch/requests
Allocates resources to start new batch transcription. Use this method to request [work] on the input audio data. Upon the accepted request, the response provides information about the associated request ID and processing status.

Headers:
Authorization: Authorization token
Accept: application/json
Content-Type: Indicates the audio format. The value must be:
audio/x-wav;codec=pcm;bit=16;rate=8000;channels=1
audio/x-wav;codec=pcm;bit=16;rate=16000;channels=1
audio/x-raw;codec=pcm;bit=16;rate=8000;channels=1
audio/x-raw;codec=pcm;bit=16;rate=16000;channels=1
video/mp4
Content-Length (optional): The size of the input voice file. Not required if a chunked transfer is used.

Query string parameters (required):
profileId: one of supported (see GET profiles)
customerId: the id of the customer. A string of minimum 1 and up to 250 alphanumeric, dot (.) and dash (-) characters.
So I will set the Content-Type to video/mp4 for processing. Note that the input size is not required if a chunked transfer is used.
Right now, I am more concerned with just posting (streaming) the file in a non-chunked format while we wait for more information on what they consider "chunking" a file.
So I am looking for help on streaming the file from disk to the endpoint. Everything I am running across for .NET Core APIs is about creating an API that receives a file from a POST, e.g. from a Razor or Angular page; I already have that. I just need some help on "re-posting" to another API.
Thanks
Using HttpClient, you open a stream to the file, create a stream content, set the necessary headers, and post to the endpoint:
Stream file = File.Open(filepath, FileMode.Open);
var content = new StreamContent(file);
content.Headers.ContentType = new MediaTypeHeaderValue("video/mp4");

client.DefaultRequestHeaders.Add("Authorization", "token here");
client.DefaultRequestHeaders.Accept.Clear();
client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

using (HttpResponseMessage response = await client.PostAsync(url, content))
{
    //...
}
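For reference, a fuller sketch that combines the pieces, including the required query string parameters from the endpoint description above, could look something like the following. The base URL, token, profileId and customerId values are placeholders that would come from configuration; only the header and parameter names are taken from the endpoint description in the question.

// A sketch only: stream the prepared MP4 from disk to the transcription endpoint.
// Requires: System, System.IO, System.Net.Http, System.Net.Http.Headers, System.Threading.Tasks.
static public async Task UploadFile(string filePath, string token, string profileId, string customerId)
{
    using (HttpClient client = new HttpClient())
    {
        client.DefaultRequestHeaders.Add("Authorization", token);
        client.DefaultRequestHeaders.Accept.Clear();
        client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

        // Required query string parameters per the endpoint description.
        string url = "https://transcription.example.com/batch/requests"
            + "?profileId=" + Uri.EscapeDataString(profileId)
            + "&customerId=" + Uri.EscapeDataString(customerId);

        using (Stream file = File.OpenRead(filePath))
        using (var content = new StreamContent(file))
        {
            content.Headers.ContentType = new MediaTypeHeaderValue("video/mp4");
            // Content-Length is filled in automatically for a seekable FileStream.

            using (HttpResponseMessage response = await client.PostAsync(url, content))
            {
                response.EnsureSuccessStatusCode();
                string body = await response.Content.ReadAsStringAsync();
                // TODO: read the request ID / processing status out of the JSON body.
            }
        }
    }
}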
Related
I have a Spring Boot API, and one of the endpoints allows users to upload videos. My controller takes the file as a MultipartFile and stores it in a temp folder accessible to Tomcat. Once it is stored on disk, I push the video to an S3 bucket.
To me, this seems less than optimal: if I wanted 100 or 1,000 users uploading at once, it seems really non-performant to write the files to disk first.
As a little background, I'm storing it on disk with the intention that, if there is an issue pushing to S3, I can retry.
The code below might show what I'm doing better than the description above:
public Video addVideo(@RequestParam("title") String title,
                      @RequestParam("Description") String description,
                      @RequestParam(value = "file", required = true) MultipartFile file) {
    this.amazonS3ClientService.uploadFileToS3Bucket(file, title, description);
}
Method for storing Video file:
String fileNameWithExtenstion = awsS3FileName + "." + FilenameUtils.getExtension(multipartFile.getOriginalFilename());

//creating the file in the server (temporarily)
File file = new File(tomcatTempDir + fileNameWithExtenstion);
FileOutputStream fos = new FileOutputStream(file);
fos.write(multipartFile.getBytes());
fos.close();

PutObjectRequest putObjectRequest = new PutObjectRequest(this.awsS3Bucket, awsS3BucketFolder + UnigueId + "/" + fileNameWithExtenstion, file);
if (enablePublicReadAccess) {
    putObjectRequest.withCannedAcl(CannedAccessControlList.PublicRead);
}
// Upload a file as a new object with ContentType and title specified.
amazonS3.putObject(putObjectRequest);

//removing the file created in the server
file.delete();
So my question is: is there a better way in Tomcat to:
A) Take in a file via a controller
B) Push it to S3
There is no other way to do it with multipart. The problem with multipart is that, to properly segment the parts from the request, they sometimes need to be skipped or re-read. That is impossible in memory without blowing memory up; therefore, Commons FileUpload caches them on disk after a certain threshold is reached.
Multipart requests are the worst way to do this. I highly recommend using either PUT or POST with content type application/octet-stream. You can take the bare request input stream and pass it to HttpClient to stream to your backend server. I did this five years ago and it works for gigabytes; I posted the solution on the Apache HttpClient mailing list.
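For readers coming from the .NET questions in this thread, a rough ASP.NET Core sketch of that same idea (the raw octet-stream request body piped straight to a backend, with no buffering to disk or memory) might look like the following. The route, backend URL and the use of IHttpClientFactory are illustrative assumptions, not the Apache HttpClient solution referenced above.

// A sketch only: an ASP.NET Core controller action that forwards the bare request
// body to a backend as application/octet-stream, without buffering it first.
// Requires: Microsoft.AspNetCore.Mvc, System.Net.Http, System.Net.Http.Headers, System.Threading.Tasks.
[HttpPost("upload")]
public async Task<IActionResult> Forward([FromServices] IHttpClientFactory clientFactory)
{
    HttpClient client = clientFactory.CreateClient();

    // Wrap the bare request stream; nothing is read into memory up front.
    var content = new StreamContent(Request.Body);
    content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");

    string backendUrl = "https://backend.example.com/files"; // placeholder backend endpoint
    using (HttpResponseMessage response = await client.PostAsync(backendUrl, content))
    {
        return StatusCode((int)response.StatusCode);
    }
}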
There is one possibility for how this could work, under specific conditions:
All parts are in the correct physical order you want to read
Your write to a backend is fast enough to sustain the read from the front
Consume the root part, then move on to the next physical one, processing the request body lazily. JAX-WS RI (Metro) has very nice handling of multipart requests for XOP/MTOM. Learn from that, because you won't be able to make it any better.
Perhaps you can try streaming the input stream from your MultipartFile directly to S3.
Consider the following uploadFileToS3Bucket method:
public PutObjectResult uploadFileToS3Bucket(InputStream input, long size, String title, String description) {
    // Indicate the length of the content to avoid the need for the AWS SDK to compute it.
    // See: https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/model/PutObjectRequest.html#PutObjectRequest-java.lang.String-java.lang.String-java.io.InputStream-com.amazonaws.services.s3.model.ObjectMetadata-
    ObjectMetadata objectMetadata = new ObjectMetadata();
    objectMetadata.setContentLength(size); // rely on the size reported by Spring; you could probably also use input.available()

    // compute the object name as appropriate
    String key = "...";

    PutObjectRequest putObjectRequest = new PutObjectRequest(
        this.awsS3Bucket, key, input, objectMetadata
    );

    // The rest of your code
    if (enablePublicReadAccess) {
        putObjectRequest.withCannedAcl(CannedAccessControlList.PublicRead);
    }

    // Upload a file as a new object with ContentType and title specified.
    return amazonS3.putObject(putObjectRequest);
}
Of course, you need to provide the service with the input stream obtained from the client request associated with the MultipartFile object:
public Video addVideo(
        @RequestParam("title") String title,
        @RequestParam("Description") String description,
        @RequestParam(value = "file", required = true) MultipartFile file) {
    try (InputStream input = file.getInputStream()) {
        this.amazonS3ClientService.uploadFileToS3Bucket(input, file.getSize(), title, description);
    }
}
Probably you can also play with the getBytes method of MultipartFile and create a ByteArrayInputStream to perform the operation.
In addVideo:
byte[] bytes = file.getBytes();
In uploadFileToS3Bucket:
ObjectMetadata objectMetadata = new ObjectMetadata();
objectMetadata.setContentLength(bytes.length);
PutObjectRequest putObjectRequest = new PutObjectRequest(
this.awsS3Bucket, key, new ByteArrayInputStream(bytes), objectMetadata
);
I would prefer the first solution, but try to determine which option offers you the best performance.
I have CSV data as follows:
FirstName,MiddleName,LastName,ImageLocation
Jack|Michel|Rechards|C:\Image\picture.jpg
Tom|Peter|Kim|C:\Image\picture123.jpg
I'm trying to configure JMeter to read the above data file and pass the image as form-data to a REST API PUT resource. The API accepts the image as a ByteBuffer.
In JMeter, multipart/form-data upload is only available for POST, not for a PUT resource.
For the image, I have written code in a BeanShell PreProcessor that puts the byte[] into a variable:
String imageLoc = vars.get("ImageLocation");
File file = new File(imageLoc);
byte[] buffer = new byte[(int) file.length()];
InputStream ios = null;
try {
    ios = new FileInputStream(file);
    if (ios.read(buffer) == -1) {
        throw new IOException("EOF reached while trying to read the whole file");
    }
} finally {
    try {
        if (ios != null)
            ios.close();
    } catch (IOException e) {
    }
}
vars.put("imageData", new String(buffer));
and the variable imageData is passed in the HTTP request body data as:
------=_parttest
Content-Type: image/jpeg; name=test.jpeg
Content-Transfer-Encoding: binary
Content-Disposition: form-data; name="Picture"; filename="test.jpeg"
${imageData}
------=_parttest--
For some reason, images are not rendered correctly for requests sent from JMeter. If I make a similar PUT request to the API from Postman to save the image and then a GET request to read it, it is read successfully.
Either I have not configured my test correctly (a BeanShell code issue or an HTTP Request body data issue), or there is a better way to configure this test to read images from the paths in the data file and pass them as form-data to the API's PUT resource.
Looking forward to experts' advice.
It is actually possible to perform a multipart PUT request using JMeter; you will need to:
Add an HTTP Header Manager and configure it to send a Content-Type header with the value of:
multipart/related; boundary=parttest
Construct your request body in the HTTP Request using the same boundary value as in the Content-Type header
See Testing REST API File Uploads in JMeter for example test plan which updates a document in Google Drive using PUT request, you can use it as a reference.
Also, I would suggest using the __FileToString() function to read the image file contents, or if you prefer scripting, go for the JSR223 PreProcessor and the Groovy language instead.
I am trying to download a 1 GB file from blob storage to the client. I previously used a MemoryStream and got an OutOfMemoryException.
Now I am trying to open a read stream from the blob and send it directly to the client.
[HttpGet]
[ResponseType(typeof(HttpResponseMessage))]
public async Task<HttpResponseMessage> DownloadAsync(string file)
{
    HttpResponseMessage result = new HttpResponseMessage(HttpStatusCode.OK);
    var stream = await blob.OpenReadAsync("container", file);
    result.Content = new StreamContent(stream);
    return result;
}
The file is downloaded correctly, but the problem is that the code downloads the complete stream on the client before the client sees the downloaded file.
I want the client to see the file as it is being downloaded, so the user knows something is downloading, rather than the request blocking and waiting until it has finished.
I am using FileSaver in Angular2:
this.controller.download('data.zip').subscribe(
data => {
FileSaver.saveAs(data, 'data.zip');
});
Does anybody have an idea how to fix it?
Thank you!
To fix it, you'd need to use the following JavaScript code instead:
var fileUri = "http://localhost:56676/api/blobfile"; //replace with your web api endpoint
var link = document.createElement('a');
document.body.appendChild(link);
link.href = fileUri;
link.click();
And then in your backend, make it like so:
HttpResponseMessage result = new HttpResponseMessage(HttpStatusCode.OK);
var stream = await blob.OpenReadAsync("container", file);
result.Content = new StreamContent(stream);
result.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment")
{
FileName = "data.zip"
};
result.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
return result;
I had the same problem.
The solution I worked out was this:
First, the expected behaviour can only occur when the client itself downloads the file from the blob, and I usually prefer downloading the file from the client anyway.
As in your case, try to get the file's blob URI and open the file in the browser using the Angular Router or simply window.location.href:
window.location.href = "https://*/filename.xlsx"
This worked for me.
I am using the LoopBack Storage component REST API in Xamarin to handle a file upload job. However, it does not work and does not return any exceptions to me.
Here is my code:
Library used: RestSharp.Portable
public async Task addFiles(string name, byte[] file)
{
    try
    {
        var client = new RestClient(App.StrongLoopAPI);
        var request = new RestRequest("containers/container1/upload", HttpMethod.Post);
        request.AddHeader("cache-control", "no-cache");
        request.AddHeader("content-type", "multipart/form-data");
        request.AddFile("file", file, name + ".jpg", System.Net.Http.Headers.MediaTypeHeaderValue.Parse("multipart/form-data"));
        var res = await client.Execute(request);
    }
    catch (Exception ex)
    {
        //return null;
    }
}
Does my function have any problems?
You're setting the Content Type (MIME type) incorrectly.
AddFile accepts the Content Type as its last parameter (for example, image/jpeg for a JPG image), whereas you're passing multipart/form-data.
There are different ways to figure out the Content Type of a file; see here:
Get MIME type from filename extension
This should fix your issue.
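For example, the call from the question would become something like the following; image/jpeg is assumed here only because the code always appends a .jpg extension.

// Same AddFile call as in the question, but with the file's actual MIME type
// (image/jpeg is assumed, since the code uploads name + ".jpg").
request.AddFile("file", file, name + ".jpg",
    System.Net.Http.Headers.MediaTypeHeaderValue.Parse("image/jpeg"));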
Actually my question is short.
How can I get an HttpPostedFile from an ASP.NET Web API POST or PUT?
I did see that I can get various information from the Request, like Request.Headers, Request.Content, and Request.Properties. Where in there can I find the file I passed, and how can I create an HttpPostedFile from it?
Thanks in advance!
Check out the great article from Henrik Nielsen on posting multipart content (i.e. posting a form with a file).
UPDATE: Added simple code for a controller to receive a file without multipart content.
If you only need your controller to receive a file (i.e. no multipart content), you could do something like the code below. The request contains only the file binary, and the filename is passed in the URL.
public Task<HttpResponseMessage> Post([FromUri]string filename)
{
    Guid uploadedFile = Guid.NewGuid();
    Task<HttpResponseMessage> task = Request.Content.ReadAsStreamAsync().ContinueWith<HttpResponseMessage>(t =>
    {
        if (t.IsFaulted || t.IsCanceled)
            throw new HttpResponseException(HttpStatusCode.InternalServerError);
        try
        {
            using (Stream stream = t.Result)
            {
                //TODO: Write the stream to file system / db as you need
            }
        }
        catch (Exception e)
        {
            Object o = e;
            return Request.CreateResponse(HttpStatusCode.InternalServerError, e.GetBaseException().Message);
        }
        return Request.CreateResponse(HttpStatusCode.Created, uploadedFile.ToString());
    });
    return task;
}
Your short question does not have a short answer, I am afraid.
ASP.NET Web API exposes you to the wonders of HTTP, while ASP.NET MVC abstracted some of it away; in this case, behind HttpPostedFile.
So a bit of background:
HTTP POSTs where a file is involved usually have multipart form-data content. This means that you are mixing different kinds of content type: your normal form data will be sent form-urlencoded, while the files will be sent as application/octet-stream.
So in Web API, all you have to do is say:
var contents = message.Content.ReadAsMultipartAsync(); // message is HttpRequestMessage
One of the contents will contain your file.
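As a rough sketch of what that can look like inside a Web API controller (the names are illustrative; the default MultipartMemoryStreamProvider buffers each part in memory, and MultipartFormDataStreamProvider can be used to buffer to disk instead):

// A sketch only: read the uploaded file(s) out of a multipart request in ASP.NET Web API.
// Requires: System.Net, System.Net.Http, System.Threading.Tasks, System.Web.Http.
public async Task<HttpResponseMessage> Post()
{
    if (!Request.Content.IsMimeMultipartContent())
        return Request.CreateResponse(HttpStatusCode.UnsupportedMediaType);

    // Buffers each part in memory.
    MultipartMemoryStreamProvider provider = await Request.Content.ReadAsMultipartAsync();

    foreach (HttpContent part in provider.Contents)
    {
        string fileName = part.Headers.ContentDisposition != null
            ? part.Headers.ContentDisposition.FileName
            : null;
        byte[] data = await part.ReadAsByteArrayAsync();
        // TODO: write the bytes to the file system / database as needed.
    }

    return Request.CreateResponse(HttpStatusCode.Created);
}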