Spring Boot 2: Serving Mp4 videos with support for Range

I'm trying to stream MP4 videos on my website with support for HTTP Range requests, so users can seek within large videos without downloading the entire file. Sadly, for whatever reason, the video is sometimes downloaded in full up to the selected position. It's like Range only works half of the time.
@GetMapping(value = "/videos/{fileName}", produces = "video/mp4")
public ResponseEntity<FileSystemResource> streamFile(
        @PathVariable("fileName") String fileName) throws MalformedURLException {
    String clipid = fileName.split("\\.")[0];
    final Video video = videoService.getVideo(clipid.trim());
    if (video == null) {
        return ResponseEntity.status(HttpStatus.NOT_FOUND).body(null);
    }
    return ResponseEntity.ok().body(new FileSystemResource(video.getFile()));
}
The responses from the server:
First seek, working as expected:
Request: range: bytes=218202112-
Response: Content-Range: bytes 218202112-596696593/596696594
Second seek, full download:
Request: range: bytes=365658112-
Response: no Content-Range header, only Content-Length

Strangely enough, the issue was caused by Cloudflare's caching.
To fix it, I created a page rule that bypasses the cache for my video endpoints.
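For completeness, and independent of the Cloudflare fix: when the origin itself has to honour Range requests explicitly, a commonly used Spring approach is to return a ResourceRegion rather than the whole resource, so each response carries only the requested slice. This is only a sketch, assuming Spring 5.x defaults (ResourceRegionHttpMessageConverter registered) and the same videoService/Video types as in the question; imports come from org.springframework.http and org.springframework.core.io.support:
@GetMapping(value = "/videos/{fileName}", produces = "video/mp4")
public ResponseEntity<ResourceRegion> streamRegion(
        @PathVariable("fileName") String fileName,
        @RequestHeader HttpHeaders headers) throws IOException {
    Video video = videoService.getVideo(fileName.split("\\.")[0].trim());
    if (video == null) {
        return ResponseEntity.notFound().build();
    }
    FileSystemResource resource = new FileSystemResource(video.getFile());
    long contentLength = resource.contentLength();
    long maxChunk = 1024 * 1024; // cap each response at 1 MB

    List<HttpRange> ranges = headers.getRange();
    ResourceRegion region;
    if (ranges.isEmpty()) {
        // No Range header: serve the first chunk.
        region = new ResourceRegion(resource, 0, Math.min(maxChunk, contentLength));
    } else {
        HttpRange range = ranges.get(0);
        long start = range.getRangeStart(contentLength);
        long end = range.getRangeEnd(contentLength);
        region = new ResourceRegion(resource, start, Math.min(maxChunk, end - start + 1));
    }
    // The converter writes the 206 body and the Content-Range header.
    return ResponseEntity.status(HttpStatus.PARTIAL_CONTENT)
            .contentType(MediaType.parseMediaType("video/mp4"))
            .body(region);
}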

Related

Video - stream slow using s3 content store

Video streaming works; however, on larger files it is slow.
How can I improve the S3 content store so it delivers the content faster?
I have tried returning a byte array and copying to a buffer; everything loads, just slowly, and I am not sure where the bottleneck is coming from.
Optional<File> f = filesRepo.findById(id);
if (f.isPresent()) {
    // Stream the stored content directly instead of buffering the whole file in memory.
    InputStreamResource inputStreamResource = new InputStreamResource(contentStore.getContent(f.get()));
    HttpHeaders headers = new HttpHeaders();
    headers.setContentLength(f.get().getContentLength());
    headers.set("Content-Type", f.get().getMimeType());
    return new ResponseEntity<Object>(inputStreamResource, headers, HttpStatus.OK);
}
I also get this warning:
Not all bytes were read from the S3ObjectInputStream, aborting HTTP connection. This is likely an error and may result in sub-optimal behavior. Request only the bytes you need via a ranged GET or drain the input stream after use.
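There is no answer recorded here, but the warning itself points at the usual remedies: either drain the stream fully or request only the bytes you need. As a rough sketch with the AWS SDK for Java v1 (the SDK that S3ObjectInputStream belongs to), a ranged GET looks like the following; s3, bucket, key, start, end and outputStream are placeholders, not names from the question:
// Fetch only the requested byte range instead of the whole object.
GetObjectRequest request = new GetObjectRequest(bucket, key)
        .withRange(start, end); // inclusive byte offsets
try (S3Object object = s3.getObject(request);
     InputStream in = object.getObjectContent()) {
    // Copy the slice to the client (Java 9+; otherwise use a buffer loop).
    // The stream is fully consumed, so the "Not all bytes were read" warning no longer applies.
    in.transferTo(outputStream);
}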

How to upload an image in chunks with client-side streaming gRPC using grpcurl

I have been trying to upload an image in chunks with client-side streaming using grpcurl. The call completes without error, except that on the server the received image data is 0 bytes.
The command I am using is:
grpcurl -proto image_service.proto -v -d @ -plaintext localhost:3010 imageservice.ImageService.UploadImage < out
This link mentions that the chunk data should be base64-encoded, so the contents of my out file are:
{"chunk_data": "<base64 encoded image data>"}
This is exactly what I am trying to achieve, but using grpcurl.
Please tell me what is wrong with my command and what the best way is to achieve streaming via grpcurl.
I have 2 more questions:
Does gRPC handle the splitting of data into chunks?
How can I first send a metadata chunk (ImageInfo type) and then the actual image data via grpcurl?
Here is my proto file:
syntax = "proto3";
package imageservice;
import "google/protobuf/wrappers.proto";
option go_package = "...";
service ImageService {
rpc UploadImage(stream UploadImageRequest) returns (UploadImageResponse) {}
}
message UploadImageRequest {
oneof data {
ImageInfo info = 1;
bytes chunk_data = 3;
};
}
message ImageInfo {
string unique_id = 1;
string image_type = 2;
}
message UploadImageResponse {
string url = 1;
}
Interesting question. I've not tried streaming messages with (the excellent) grpcurl.
The documentation does not explain how to do this but this issue shows how to stream using stdin.
I recommend you try it that way first to ensure that works for you.
If it does, then bundling various messages into a file (out) should also work.
Your follow-on questions suggest you're doing this incorrectly.
chunk_data is the result of splitting the file into chunks; each of these base64-encoded strings should be a subset (a chunk) of your overall image file.
Your first message should be { "info": "...." }; subsequent messages will be { "chunk_data": "<base64-encoded chunk>" } until EOF.
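A hedged illustration of that with grpcurl (the id, type and chunk strings below are placeholders): put one JSON message per line in out, the info message first, followed by the base64-encoded chunks, and pass -d @ so grpcurl reads the stream of messages from stdin:
{"info": {"unique_id": "abc-123", "image_type": ".jpg"}}
{"chunk_data": "<base64-encoded chunk 1>"}
{"chunk_data": "<base64-encoded chunk 2>"}
grpcurl -proto image_service.proto -v -d @ -plaintext localhost:3010 imageservice.ImageService.UploadImage < out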

How to upload byte array to S3 bucket in Java?

In a Spring Boot application I read an image file from a remote service, which returns a byte array; from the response headers I can determine the file extension:
ResponseEntity<byte[]> result = restTemplate.exchange(url, HttpMethod.GET, entity, byte[].class);
Now I want to put this byte array into an S3 bucket, in a folder that I decide at run time; for example, the folder name could be based on the current timestamp.
I checked the AmazonS3 class, but it doesn't seem to have any such API that can help me.
How can this be done?
As per the example from the documentation:
https://docs.aws.amazon.com/sdk-for-java/v2/developer-guide/examples-s3-objects.html#upload-object
// Put object; here 'bytes' is the byte array.
PutObjectResponse response = s3.putObject(
        PutObjectRequest.builder().bucket(bucketName).key(filePathLocation).build(),
        RequestBody.fromBytes(bytes));
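Expanding that one-liner into a small helper that also builds the run-time "folder" from a timestamp, as the question asks. This is only a sketch against the AWS SDK for Java v2; the region, bucket name and file name are assumptions:
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import java.time.Instant;

public class S3Uploader {

    private final S3Client s3 = S3Client.builder().region(Region.EU_WEST_1).build();

    /** Uploads the bytes under a timestamp-based key prefix and returns the object key. */
    public String upload(String bucket, byte[] bytes, String extension) {
        // e.g. "1700000000000/image.jpg" -- the "folder" is just part of the key.
        String key = Instant.now().toEpochMilli() + "/image" + extension;
        s3.putObject(
                PutObjectRequest.builder().bucket(bucket).key(key).build(),
                RequestBody.fromBytes(bytes));
        return key;
    }
}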
You can use the MinIO Java S3 client. Here you can find the documentation.
The code will look something like the following:
MinioClient minioClient =
    MinioClient.builder()
        .endpoint("https://play.min.io")
        .credentials("Q3AM3UQ867SPQQA43P2F", "zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG")
        .build();

StringBuilder builder = new StringBuilder();
for (int i = 0; i < 1000; i++) {
  builder.append(
      "Sphinx of black quartz, judge my vow: Used by Adobe InDesign to display font samples. ");
  builder.append("(29 letters)\n");
  builder.append(
      "Jackdaws love my big sphinx of quartz: Similarly, used by Windows XP for some fonts. ");
  builder.append("(31 letters)\n");
  builder.append(
      "Pack my box with five dozen liquor jugs: According to Wikipedia, this one is used on ");
  builder.append("NASAs Space Shuttle. (32 letters)\n");
  builder.append(
      "The quick onyx goblin jumps over the lazy dwarf: Flavor text from an Unhinged Magic Card. ");
  builder.append("(39 letters)\n");
  builder.append(
      "How razorback-jumping frogs can level six piqued gymnasts!: Not going to win any brevity ");
  builder.append("awards at 49 letters long, but old-time Mac users may recognize it.\n");
  builder.append(
      "Cozy lummox gives smart squid who asks for job pen: A 41-letter tester sentence for Mac ");
  builder.append("computers after System 7.\n");
  builder.append(
      "A few others we like: Amazingly few discotheques provide jukeboxes; Now fax quiz Jack! my ");
  builder.append("brave ghost pled; Watch Jeopardy!, Alex Trebeks fun TV quiz game.\n");
  builder.append("---\n");
}

// Create a InputStream for object upload.
ByteArrayInputStream bais = new ByteArrayInputStream(builder.toString().getBytes("UTF-8"));

// Create object 'my-objectname' in 'my-bucketname' with content from the input stream.
minioClient.putObject(
    PutObjectArgs.builder().bucket("my-bucketname").object("my-objectname").stream(
            bais, bais.available(), -1)
        .build());
bais.close();
System.out.println("my-objectname is uploaded successfully");
The full code can be found here.
Check out the AWS Java SDK.
Here is the getting started section:
https://docs.aws.amazon.com/sdk-for-java/v2/developer-guide/getting-started.html
To use it in a Spring context, add the Maven dependency:
https://docs.aws.amazon.com/sdk-for-java/v2/developer-guide/setup-project-maven.html
Uploading an object to an S3 bucket:
https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/examples-s3-objects.html#upload-object
import com.amazonaws.AmazonServiceException;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import java.io.File;

System.out.format("Uploading %s to S3 bucket %s...\n", file_path, bucket_name);
final AmazonS3 s3 = AmazonS3ClientBuilder.standard().withRegion(Regions.DEFAULT_REGION).build();
try {
    s3.putObject(bucket_name, key_name, new File(file_path));
} catch (AmazonServiceException e) {
    System.err.println(e.getErrorMessage());
    System.exit(1);
}

AWS multipart upload from inputStream has bad offset

I am using the Java Amazon AWS SDK to perform some multipart uploads from HDFS to S3. My code is the following:
for (int i = startingPart; currentFilePosition < contentLength; i++) {
    FSDataInputStream inputStream = fs.open(new Path(hdfsFullPath));
    // Last part can be less than 5 MB. Adjust part size.
    partSize = Math.min(partSize, (contentLength - currentFilePosition));
    // Create request to upload a part.
    UploadPartRequest uploadRequest = new UploadPartRequest()
            .withBucketName(bucket).withKey(s3Name)
            .withUploadId(currentUploadId)
            .withPartNumber(i)
            .withFileOffset(currentFilePosition)
            .withInputStream(inputStream)
            .withPartSize(partSize);
    // Upload part and add response to our list.
    partETags.add(s3Client.uploadPart(uploadRequest).getPartETag());
    currentFilePosition += partSize;
    inputStream.close();
    lastFilePosition = currentFilePosition;
}
However, the uploaded file is not the same as the original one. More specifically, I am testing with a file of about 20 MB. The parts I upload are 5 MB each. At the end of each 5 MB part, I see some extra text, which is always 96 characters long.
Even stranger, if I add something stupid to .withFileOffset(), for example,
.withFileOffset(currentFilePosition-34)
the error stays the same. I was expecting to get different characters, but I am getting the EXACT same 96 extra characters, as if I hadn't modified the line.
Any ideas what might be wrong?
Thanks,
Serban
I figured it out. This came from a wrong assumption on my part. It turns out the file offset in .withFileOffset(...) tells you the offset at which to write in the destination file; it says nothing about the source. Because I was opening and closing the stream repeatedly, I was always reading from the beginning of the file, just writing to a different offset. The solution is to add a seek after opening the stream:
FSDataInputStream inputStream = fs.open(new Path(hdfsFullPath));
inputStream.seek(currentFilePosition);

Stream an HTTP response in Java

I want to write the response of an HTTP request to a file. However, I want to stream the response into a physical file without waiting for the entire response to load.
I will actually be making a request to a JHAT server to return all the Strings from the heap dump. My browser hangs before the response completes, as there are 70k such objects, so I want to write them to a file that I can scan through.
Thanks in advance.
Read a limited amount of data from the HTTP stream and write it to a file stream. Do this until all data has been handled.
Here is example code demonstrating the principle. In this example I do not deal with any I/O errors. I chose an 8 KB buffer: faster than processing one byte at a time, yet still limiting the amount of data pulled into RAM during each iteration.
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URL;

final URL url = new URL("http://example.com/");
final InputStream istream = url.openStream();
final OutputStream ostream = new FileOutputStream("/tmp/data.txt");
final byte[] buffer = new byte[1024 * 8];
while (true) {
    final int len = istream.read(buffer);
    if (len <= 0) {
        break;
    }
    // Write only the bytes actually read in this iteration.
    ostream.write(buffer, 0, len);
}
ostream.close();
istream.close();
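On Java 9 and later the same copy can be written with InputStream.transferTo, which performs the buffered loop internally (same example URL and target path as above):
try (InputStream in = new URL("http://example.com/").openStream();
     OutputStream out = new FileOutputStream("/tmp/data.txt")) {
    in.transferTo(out); // copies in chunks; the response is never held fully in memory
}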
