File upload progress bar using RestTemplate.postForLocation - spring

I have a Java desktop client application that uploads files to a REST service.
All calls to the REST service are handled using the Spring RestTemplate class.
I'm looking to implement a progress bar and cancel functionality as the files being uploaded can be quite big.
I've been looking for a way to implement this on the web but have had no luck.
I tried implementing my own ResourceHttpMessageConverter and substituting the writeInternal() method but this method seems to be called during some sort of buffered operation prior to actually posting the request (so the stream is read all in one go before sending takes place).
I've even tried overriding the CommonsClientHttpRequestFactory.createRequest() method and implementing my own RequestEntity class with a special writeRequest() method but the same issue occurs (stream is all read before actually sending the post).
Am I looking in the wrong place? Has anyone done something similar?
A lot of the stuff I've read on the web about implementing progress bars talks about starting the upload off and then using separate AJAX requests to poll the web server for progress, which seems like an odd way to go about it.
Any help or tips greatly appreciated.

This is an old question but it is still relevant.
I tried implementing my own ResourceHttpMessageConverter and substituting the writeInternal() method but this method seems to be called during some sort of buffered operation prior to actually posting the request (so the stream is read all in one go before sending takes place).
You were on the right track. Additionally, you needed to disable request body buffering on the RestTemplate's ClientHttpRequestFactory, something like this:
HttpComponentsClientHttpRequestFactory clientHttpRequestFactory = new HttpComponentsClientHttpRequestFactory();
clientHttpRequestFactory.setBufferRequestBody(false);
RestTemplate restTemplate = new RestTemplate(clientHttpRequestFactory);
Here's a working example for tracking file upload progress with RestTemplate.
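The linked example isn't reproduced here, but a minimal sketch of the approach looks something like the following (the upload URL, the File and the Swing JProgressBar are illustrative placeholders, not anything from the question): with buffering disabled as above, each chunk is written straight to the socket, so the progress callback reflects the actual upload.
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;
import javax.swing.JProgressBar;
import javax.swing.SwingUtilities;
import org.springframework.http.HttpMethod;
import org.springframework.http.MediaType;
import org.springframework.web.client.RequestCallback;
import org.springframework.web.client.RestTemplate;

public class UploadProgressExample {

    public void uploadWithProgress(RestTemplate restTemplate, File file, JProgressBar progressBar) {
        RequestCallback uploadCallback = request -> {
            request.getHeaders().setContentType(MediaType.APPLICATION_OCTET_STREAM);
            request.getHeaders().setContentLength(file.length());
            long total = file.length();
            long written = 0;
            byte[] buffer = new byte[8192];
            try (InputStream in = new FileInputStream(file);
                 OutputStream out = request.getBody()) {
                int read;
                while ((read = in.read(buffer)) != -1) {
                    // With buffering disabled, this write goes straight to the socket.
                    out.write(buffer, 0, read);
                    written += read;
                    int percent = (int) (100 * written / total);
                    // Update the Swing progress bar on the EDT.
                    SwingUtilities.invokeLater(() -> progressBar.setValue(percent));
                }
            }
        };
        // Pass a ResponseExtractor instead of null if you need the response body.
        restTemplate.execute("http://example.com/upload", HttpMethod.POST, uploadCallback, null);
    }
}
Cancellation can be implemented by checking a volatile "cancelled" flag inside the loop and throwing an IOException (or closing the stream) when the user hits cancel.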

There was not much detail about what this app is or how it works, so this response is vague, but I believe you can do something like the following to track your upload progress.
If this really is a Java client app (i.e. not HTML/JavaScript but a Java program) and you really are having it upload a file as a stream, then you should be able to track your upload progress by counting the bytes in the array being transmitted in the stream buffer and comparing that to the total byte count from the file object.
When you get the file, get its size.
long totalBytes = file.length(); // file.getTotalSpace() would give the size of the whole partition, not the file
Wherever you are transmitting as a stream, you are presumably adding bytes to an output buffer of some kind:
byte[] bytesFromSomeFileReader = [whatEverYouAreUsingToReadTheFile];
ByteArrayOutputStream byteStreamToServer = new ByteArrayOutputStream();
int bytesTransmitted = 0;
for (byte fileByte : bytesFromSomeFileReader) {
    byteStreamToServer.write(fileByte);
    //
    // Update your progress bar every kilobyte sent.
    //
    bytesTransmitted++;
    if ((bytesTransmitted % 1000) == 0) {
        someMethodToUpdateProgressBar();
    }
}

Related

How can I send a streamed response using OkHttp's mockwebserver?

The typical flow when returning the contents of a file from a server back to the client is to:
1.) Obtain an InputStream to the file
2.) Write chunks of the stream to the open socket
3.) Close the input stream
When using OkHttp's MockWebServer, the MockResponse only accepts an Okio Buffer. This means we must read the entire input stream contents into the buffer before sending it, which will probably result in an OutOfMemoryError if the file is too large. Is there a way to accomplish the logic flow I outlined above without using a duplex response, or should I use another library? Here's how I'm currently sending the file in Kotlin:
val inputStream = FileInputStream(file)
val source = inputStream.source()
val buf = Buffer()
buf.writeAll(source.buffer())
source.close()
val response = HTTP_200
response.setHeader("Content-Type", "video/mp4")
response.setBody(buf)
return response
// Dispatch the response, etc...
This is a design limitation of MockWebServer, which guarantees that there are no IOExceptions on the serving side. If you have a response that's bigger than you can keep in memory, MockWebServer is the wrong tool for the job.

Start processing Flux response from server before completion: is it possible?

I have 2 Spring-Boot-Reactive apps, one server and one client; the client calls the server like so:
Flux<Thing> things = thingsApi.listThings(5);
And I want to have this as a list for later use:
// "extractContent" operation takes 1.5s per "thing"
List<String> thingsContent = things.map(ThingConverter::extractContent)
    .collect(Collectors.toList())
    .block();
On the server side, the endpoint definition looks like this:
@Override
public Mono<ResponseEntity<Flux<Thing>>> listThings(
        @NotNull @Valid @RequestParam(value = "nbThings") Integer nbThings,
        ServerWebExchange exchange
) {
    // "getThings" operation takes 1.5s per "thing"
    Flux<Thing> things = thingsService.getThings(nbThings);
    return Mono.just(new ResponseEntity<>(things, HttpStatus.OK));
}
The signature comes from the Open-API generated code (Spring-Boot server, reactive mode).
What I observe: the client jumps to things.map immediately but only starts processing the Flux after the server has finished sending all the "things".
What I would like: the server should send the "things" as they are generated so that the client can start processing them as they arrive, effectively halving the processing time.
Is there a way to achieve this? I've found many tutorials online for the server part, but none with a java client. I've heard of server-sent events, but can my goal be achieved using a "classic" Open-API endpoint definition that returns a Flux?
The problem seemed too complex to fit a minimal viable example in the question body; full code available for reference on Github.
EDIT: redirect link to main branch after merge of the proposed solution
I've got it running by making two changes:
First: I've changed the content type of the response of your /things endpoint to:
content:
  text/event-stream
Don't forget to also change the default response; otherwise the client will expect the type application/json and will wait for the whole response.
Second: I've changed the return of ThingsService.getThings to this.getThingsFromExistingStream (the method you commented out).
I pushed my changes to a new branch fix-flux-response on your Github, so you can test them directly.
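For illustration (this is a sketch, not code from the linked repository), the client side can then consume the event stream element by element with WebClient; imports are omitted and the base URL is made up:
Flux<Thing> things = WebClient.create("http://localhost:8080")
        .get()
        .uri(uriBuilder -> uriBuilder.path("/things").queryParam("nbThings", 5).build())
        .accept(MediaType.TEXT_EVENT_STREAM)
        .retrieve()
        .bodyToFlux(Thing.class);

// Each Thing is mapped as soon as it arrives, so extractContent overlaps
// with the server still generating the remaining things.
List<String> thingsContent = things
        .map(ThingConverter::extractContent)
        .collectList()
        .block();
With both sides taking roughly 1.5s per "thing", this overlap is what brings the total time down toward the halved figure mentioned in the question.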

Stream response from HTTP client with Spring/Project reactor

How to stream response from reactive HTTP client to the controller without having the whole response body in the application memory at any time?
Practically all examples of Project Reactor clients return Mono<T>. As far as I understand, reactive streams are about streaming, not loading everything and then sending the response.
Is it possible to return something like Flux<Byte>, to make it possible to transfer big files from some external service to the application's client without using a huge amount of RAM to store an intermediate result?
It should happen naturally when you simply return a Flux<WHATEVER>: each WHATEVER will be flushed to the network as soon as possible. In such a case, the response uses chunked HTTP encoding, and the bytes from each chunk are discarded once they've been flushed to the network.
Another possibility is to upgrade the HTTP response to SSE (Server-Sent Events), which can be achieved in WebFlux by annotating the controller method with something like @GetMapping(path = "/stream-flux", produces = MediaType.TEXT_EVENT_STREAM_VALUE) (the produces part is the important one).
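As an illustration of the first point, here is a minimal sketch (the endpoint path and upstream URL are made up for the example) that relays a large download through a WebFlux controller as a Flux<DataBuffer>, so only small chunks are held in memory at any time:
import org.springframework.core.io.buffer.DataBuffer;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Flux;

@RestController
public class FileRelayController {

    private final WebClient webClient = WebClient.create("http://upstream-service");

    @GetMapping(value = "/relay", produces = MediaType.APPLICATION_OCTET_STREAM_VALUE)
    public Flux<DataBuffer> relayLargeFile() {
        // Each DataBuffer is flushed to the response and released as it arrives,
        // so the full body never sits in the application's memory.
        return webClient.get()
                .uri("/big-file")
                .retrieve()
                .bodyToFlux(DataBuffer.class);
    }
}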
I don't think that in your scenario you need to create an event stream, because an event stream is more suited to emitting events in real time. I think you'd be better off doing it like this:
@GetMapping(value = "bytes")
public Flux<Byte> getBytes() {
    return byteService.getBytes();
}
and it will be sent as a stream.
If you still want it as an event stream:
@GetMapping(value = "bytes", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<List<Byte>> getBytes() {
    return byteService.getBytes().buffer(); // buffer() groups the bytes so the return type lines up
}

Live-Streaming webcam webm stream (using getUserMedia) by recording chunks with MediaRecorder over WEB API with WebSockets and MediaSource

I'm trying to broadcast a webcam's video to other clients in real-time, but I encounter some problems when viewers start watching in the middle.
For this purpose, I get the webcam's stream using getUserMedia (and all its siblings).
Then, on a button click, I start recording the stream and send each segment/chunk/whatever you call it to the broadcaster's websocket's backend:
var mediaRecorder = new MediaRecorder(stream);
mediaRecorder.start(1000);
mediaRecorder.ondataavailable = function (event) {
    uploadVideoSegment(event); // wrap with a blob and call socket.send(...)
};
On the server side (Web API, using Microsoft.Web.WebSockets),
I get the byte[] as is perfectly.
Then I send the byte[] to the Viewers which are currently connected to the Broadcaster, read it on the onmessage event of the socket using a FileReader and append the Uint8Array to the sourceBuffer of the MediaSource which is the src of the HTML5 video element.
When the Viewers get the byte[] from the beginning, specifically, the first 126 bytes which start with the EBMLHeader (0x1A45DFA3) and end with the Cluster's beginning (0x1F43B675), and then the whole bulk of the media - it's being played fine.
The problem occurs when a new viewer joins in the middle and fetches the second chunk and later.
I've been trying to research and get my hands a little dirty with a few approaches. I understand that the header is essential (http://www.slideshare.net/mganeko/media-recorder-and-webm), and that there's some stuff concerning keyframes and the like, but I got confused very quickly.
So far, I've tried to write my own simple webm parser in C# (based on a node.js project on GitHub - https://github.com/mganeko/wmls). So I split the header from the first chunk, cached it, and tried to send it with each later chunk. Of course it didn't work.
I think that maybe the MediaRecorder is splitting the cluster in the middle as the ondataavailable event is fired (that's because I've noticed that the start of the second chunk doesn't begin with the Cluster's header).
At this point I got stuck without knowing how to use the parser to get it work.
Then I read about using ffmpeg to convert the webm stream such that each frame is also a keyframe - Encoding FFMPEG to MPEG-DASH – or WebM with Keyframe Clusters – for MediaSource API (in Chris Nolet's answer).
I tried to use FFMpegConverter (for .Net) using:
var conv = new FFMpegConverter();
var outputStream = new MemoryStream();
var liveMedia = conv.ConvertLiveMedia("webm", outputStream, "webm", new ConvertSettings { VideoCodec = "vp8", CustomOutputArgs = "-g 1" });
liveMedia.Start();
liveMedia.Write(vs.RawByteArr, 0, vs.RawByteArr.Length); //vs.RawByteArr is the byte[] I got from the MediaRecorder
liveMedia.Stop();
byte[] buf = new byte[outputStream.Length];
outputStream.Position = 0;
outputStream.Read(buf, 0, (int)outputStream.Length);
I'm not familiar with FFMPEG, so I'm probably not passing the parameters correctly, although that's what I saw in the answer (it was only described very briefly there).
Of course I encountered plenty of problems here:
When using websockets, running the FFMpegConverter simply forced the websockets channel to close (I'd be glad if someone could explain why).
I didn't give up: I rewrote everything without websockets, using HttpGet (for fetching the segment from the server) and HttpPost (with multipart blobs and all the after-party for posting the recorded chunks), and tried to use the FFMpegConverter as mentioned above.
For the first segment it worked, BUT it output a byte[] with half the length of the original one (I'd be glad if someone could explain that as well), and for the other chunks it threw an exception (every time, not just once) saying the pipe has ended.
I'm getting lost.
Please help me, anybody. The main 4 questions are:
How can I get played the chunks that follow the first chunk of the MediaRecorder?
(Meanwhile, I just get the sourceBuffer close/end events fired, and the sourceBuffer is detached from its parent MediaSource object (causing an exception like "sourceBuffer has been removed from its parent"), because the byte[] passed to it is not valid. Maybe I'm not using the webm parser I wrote correctly to detect the important parts in the second chunk, which, by the way, doesn't start with a cluster - which is why I wrote that the MediaRecorder seems to be cutting the cluster in the middle.)
Why does the FFMpeg cause the WebSockets to be closed?
Am I using the FFMpegConverter.ConvertLiveMedia with the correct parameters in order to get a new webm segment with all the information needed in it to get it as a standalone chunk, without being dependent on the former chunks (as Chris Nolet said in his answer in the SO link above)?
Why does the FFMpegConverter throw "the pipe ended" exception?
Any help will be extremely highly appreciated.

Writing to channel in a loop

I have to send a lot of data to a client connected to my server in small blocks.
So, I have something like:
for (;;) {
    messageEvent.getChannel().write("Hello World");
}
The problem is that, for some reason, the client is receiving dirty data, as if the Netty buffer is not cleared at each iteration, so we get something like "Hello WorldHello".
If I make a little change in my code putting a thread sleep everything works fine:
for (;;) {
    messageEvent.getChannel().write("Hello World");
    Thread.sleep(1000);
}
As MRAB said, if the server is sending multiple messages on a channel without indicating the end of each message, the client cannot always read the messages correctly. Adding a sleep after writing each message will not solve the root cause of the problem either.
To fix this, you have to mark the end of each message in a way the other party can identify. If client and server are both using Netty, you can add a LengthFieldPrepender and a LengthFieldBasedFrameDecoder before your JSON handlers (see the pipeline sketch at the end of this answer).
String encodedMsg = new Gson().toJson(
        sendToClient, new TypeToken<ArrayList<CoordinateVO>>() {}.getType());
By default, Gson uses HTML escaping for content; sometimes this leads to weird encoding. You can disable it if required by using a Gson factory:
final static GsonBuilder gsonBuilder = new GsonBuilder().disableHtmlEscaping();
....
String encodedMsg = gsonBuilder.create().toJson(object);
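For reference, a minimal sketch of such a framed pipeline, assuming Netty 3 (which the messageEvent API in the question suggests); the class name is illustrative and the JSON/business handler is left as a placeholder:
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder;
import org.jboss.netty.handler.codec.frame.LengthFieldPrepender;
import org.jboss.netty.handler.codec.string.StringDecoder;
import org.jboss.netty.handler.codec.string.StringEncoder;
import org.jboss.netty.util.CharsetUtil;

public class FramedPipelineFactory implements ChannelPipelineFactory {
    @Override
    public ChannelPipeline getPipeline() {
        ChannelPipeline pipeline = Channels.pipeline();
        // Inbound: strip the 4-byte length prefix and emit one complete message at a time.
        pipeline.addLast("frameDecoder", new LengthFieldBasedFrameDecoder(1048576, 0, 4, 0, 4));
        pipeline.addLast("stringDecoder", new StringDecoder(CharsetUtil.UTF_8));
        // Outbound: prepend a 4-byte length to every message written to the channel.
        pipeline.addLast("framePrepender", new LengthFieldPrepender(4));
        pipeline.addLast("stringEncoder", new StringEncoder(CharsetUtil.UTF_8));
        // Add your own JSON/business handler(s) here, after the framing codecs.
        return pipeline;
    }
}
With this in place, the receiving side's frame decoder hands each message to the next handler as a separate, complete unit, so the "Hello WorldHello" concatenation can no longer be observed.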
In neither case are you sending anything to indicate where one item ends and the next begins, or how long each item is.
In the second case the sleep gives the channel time to flush, so the client sees a 'break', which it interprets as the end of the item.
The client should never see this "dirty data". If that's really the case then it's a bug, but to be honest I can't think of anything that could lead to this in Netty. Every Channel.write(..) event is added to a queue which then gets written to the client when possible, so every piece of data passed to the write(..) method just gets written; there is no "concat" of the data.
Do you maybe have a custom encoder in the pipeline that buffers the data before sending it to the client?
It would also help if you could show the complete code that gives this behaviour, so we can see which handlers are in the pipeline, etc.
