Is it possible to use http4k to stream long responses?

I would like to use http4k to stream a long response. I plan to use Content-Type: multipart/x-mixed-replace so that I can push data to the client more or less endlessly. In http4k we have typealias HttpHandler = (Request) -> Response, but my handler cannot return a Response in the usual sense, because what I want to return is not a finite document but an endless stream. Does this mean I should use something else for what I want?

If you're pulling from another HTTP source, you can use the streaming body mode on one of the various HTTP client modules (Apache/OkHttp/Jetty will work).
Alternatively if you're generating the content yourself or streaming from a database, you'll have to start a Thread and handle it that way. There's an example of how to do this in the source code in a test case that is used to prove the various clients can do streaming.
https://github.com/http4k/http4k/blob/master/http4k-core/src/test/kotlin/org/http4k/streaming/StreamingContract.kt
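For the generate-it-yourself case, a minimal sketch of that thread-plus-stream idea could look like the following. This is an illustration, not the linked test: it assumes the Jetty server backend, http4k's InputStream body overload, and a made-up one-second "tick" payload.
import org.http4k.core.HttpHandler
import org.http4k.core.Response
import org.http4k.core.Status.Companion.OK
import org.http4k.server.Jetty
import org.http4k.server.asServer
import java.io.PipedInputStream
import java.io.PipedOutputStream
import kotlin.concurrent.thread

// The handler still returns a Response immediately, but its body is an
// InputStream that a background thread keeps feeding.
val streamingApp: HttpHandler = {
    val sink = PipedOutputStream()
    val stream = PipedInputStream(sink)

    // Push parts until the client disconnects (the write then fails and the thread ends).
    thread(isDaemon = true) {
        sink.use { out ->
            while (true) {
                out.write("--frame\r\nContent-Type: text/plain\r\n\r\ntick\r\n".toByteArray())
                out.flush()
                Thread.sleep(1000)
            }
        }
    }

    Response(OK)
        .header("Content-Type", "multipart/x-mixed-replace; boundary=frame")
        .body(stream)
}

fun main() {
    streamingApp.asServer(Jetty(8000)).start()
}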

Could it be that a websocket is what you need?
https://www.http4k.org/blog/typesafe_websockets/
That way you can have an endless stream of events (e.g. if you need to push a feed).
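Following the pattern from that blog post, a minimal sketch could look like this (the /feed route, the one-second loop and the port are made up, and the exact bind/PolyHandler signatures vary between http4k versions):
import org.http4k.core.HttpHandler
import org.http4k.core.Response
import org.http4k.core.Status.Companion.OK
import org.http4k.routing.bind
import org.http4k.routing.websockets
import org.http4k.server.Jetty
import org.http4k.server.PolyHandler
import org.http4k.server.asServer
import org.http4k.websocket.Websocket
import org.http4k.websocket.WsMessage
import kotlin.concurrent.thread

fun main() {
    val ws = websockets(
        "/feed" bind { ws: Websocket ->
            // Push an event every second until the client disconnects.
            thread(isDaemon = true) {
                var n = 0
                while (true) {
                    ws.send(WsMessage("event ${n++}"))
                    Thread.sleep(1000)
                }
            }
        }
    )
    val http: HttpHandler = { Response(OK) }
    PolyHandler(http, ws).asServer(Jetty(9000)).start()
}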

Related

Start processing Flux response from server before completion: is it possible?

I have 2 Spring-Boot-Reactive apps, one server and one client; the client calls the server like so:
Flux<Thing> things = thingsApi.listThings(5);
And I want to have this as a list for later use:
// "extractContent" operation takes 1.5s per "thing"
List<String> thingsContent = things.map(ThingConverter::extractContent)
    .collect(Collectors.toList())
    .block();
On the server side, the endpoint definition looks like this:
@Override
public Mono<ResponseEntity<Flux<Thing>>> listThings(
    @NotNull @Valid @RequestParam(value = "nbThings") Integer nbThings,
    ServerWebExchange exchange
) {
    // "getThings" operation takes 1.5s per "thing"
    Flux<Thing> things = thingsService.getThings(nbThings);
    return Mono.just(new ResponseEntity<>(things, HttpStatus.OK));
}
The signature comes from the Open-API generated code (Spring-Boot server, reactive mode).
What I observe: the client jumps to things.map immediately but only starts processing the Flux after the server has finished sending all the "things".
What I would like: the server should send the "things" as they are generated so that the client can start processing them as they arrive, effectively halving the processing time.
Is there a way to achieve this? I've found many tutorials online for the server part, but none with a java client. I've heard of server-sent events, but can my goal be achieved using a "classic" Open-API endpoint definition that returns a Flux?
The problem seemed too complex to fit a minimal viable example in the question body; full code available for reference on Github.
EDIT: redirect link to main branch after merge of the proposed solution
I got it running by changing two things.
First: I changed the content type of the response of your /things endpoint to:
content:
  text/event-stream
Don't forget to also change the default response; otherwise the client will expect application/json and will wait for the whole response.
Second: I changed the return of ThingsService.getThings to this.getThingsFromExistingStream (the method you had commented out).
I pushed my changes to a new branch fix-flux-response on your Github, so you can test them directly.
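For reference, the same idea expressed outside the generated code, as a hedged Kotlin sketch (using the Thing/ThingsService names from the question; the paths and port are made up): the endpoint produces text/event-stream and the client accepts it, so elements are decoded one by one as they arrive.
import org.springframework.http.MediaType
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RequestParam
import org.springframework.web.bind.annotation.RestController
import org.springframework.web.reactive.function.client.WebClient
import reactor.core.publisher.Flux

@RestController
class ThingsController(private val thingsService: ThingsService) {

    // Server side: Server-Sent Events, so each Thing is flushed as soon as it is generated.
    @GetMapping("/things", produces = [MediaType.TEXT_EVENT_STREAM_VALUE])
    fun listThings(@RequestParam nbThings: Int): Flux<Thing> =
        thingsService.getThings(nbThings)
}

// Client side: accept text/event-stream and process each Thing as it arrives.
val things: Flux<Thing> = WebClient.create("http://localhost:8080")
    .get()
    .uri("/things?nbThings={n}", 5)
    .accept(MediaType.TEXT_EVENT_STREAM)
    .retrieve()
    .bodyToFlux(Thing::class.java)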

Stream response from HTTP client with Spring/Project reactor

How to stream response from reactive HTTP client to the controller without having the whole response body in the application memory at any time?
Practically all examples of project reactor client return Mono<T>. As far as I understand reactive streams are about streaming, not loading it all and then sending the response.
Is it possible to return kind of Flux<Byte> to make it possible to transfer big files from some external service to the application client without a need of using a huge amount of RAM memory to store intermediate result?
It should happen naturally by simply returning a Flux<WHATEVER>, where each WHATEVER will be flushed on the network as soon as possible. In that case, the response uses chunked HTTP encoding, and the bytes from each chunk are discarded once they've been flushed to the network.
Another possibility is to upgrade the HTTP response to SSE (Server-Sent Events), which can be achieved in WebFlux by setting the controller method to something like @GetMapping(path = "/stream-flux", produces = MediaType.TEXT_EVENT_STREAM_VALUE) (the produces part is the important one).
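A hedged Kotlin sketch of that first approach (class, endpoint and upstream URL are made up): pass the upstream body through as a Flux of DataBuffers, so each chunk is written to the caller and released without ever holding the whole file in memory.
import org.springframework.core.io.buffer.DataBuffer
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RestController
import org.springframework.web.reactive.function.client.WebClient
import reactor.core.publisher.Flux

@RestController
class FileProxyController(private val webClient: WebClient) {

    // Each DataBuffer chunk is flushed to the caller as soon as it arrives
    // from the external service, so the full file is never held in memory.
    @GetMapping("/big-file")
    fun bigFile(): Flux<DataBuffer> =
        webClient.get()
            .uri("https://external.example/big-file")
            .retrieve()
            .bodyToFlux(DataBuffer::class.java)
}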
I don't think that in your scenario you need to create an event stream, because an event stream is more for emitting events in real time. I think you'd be better off doing it like this:
@GetMapping(value = "bytes")
public Flux<Byte> getBytes() {
    return byteService.getBytes();
}
and you can send it as a stream.
If you still want it as an event stream:
@GetMapping(value = "bytes", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<Byte> getBytes() {
    return byteService.getBytes();
}

Write data coming from concurrent requests to Kafka

I'm building an event collector. It will receive an HTTP request like http://collector.me/?uuid=abc123&product=D3F4&metric=view and then write the request parameters to an Apache Kafka topic. For now I use Plug, Cowboy and KafkaEx.
defmodule Collector.Router do
  import Plug.Conn

  def init(opts) do
    opts
  end

  def call(conn, _opts) do
    conn = fetch_query_params(conn)
    KafkaEx.produce("test", 0, "#{inspect conn.query_params}")

    conn
    |> put_resp_content_type("text/plain")
    |> send_resp(200, "OK")
  end
end
AFAIK, Cowboy spawns a new process for each request, so I think writing to Kafka in the call function is a proper way to do it, because it's easy to create hundreds of thousands of processes in Elixir. But I wonder whether this is the right approach? Do I need a queue before writing to Kafka, or something like that? My goal is to handle as many concurrent requests as possible.
Thanks.
Consider using the Confluent Kafka REST Proxy because then you might not need to write any server side code.
https://github.com/confluentinc/kafka-rest
Worst case, you might need to rewrite the incoming URL into a properly formatted HTTP POST with JSON data and the right Content-Type header. This can be done with an application load balancer or a basic reverse proxy like HAProxy or nginx.
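For illustration only (not needed if the rewrite happens in the load balancer), here is a small Kotlin sketch of the shape of the POST the proxy would have to produce; the proxy address, port 8082 and topic name are assumptions.
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun main() {
    // The Kafka REST Proxy expects the records wrapped in a JSON envelope.
    val body = """{"records":[{"value":{"uuid":"abc123","product":"D3F4","metric":"view"}}]}"""

    val request = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:8082/topics/test"))          // assumed proxy address and topic
        .header("Content-Type", "application/vnd.kafka.json.v2+json")  // v2 JSON content type
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build()

    val response = HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString())
    println(response.statusCode())
}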

File upload progress bar using RestTemplate.postForLocation

I have a Java desktop client application that uploads files to a REST service.
All calls to the REST service are handled using the Spring RestTemplate class.
I'm looking to implement a progress bar and cancel functionality as the files being uploaded can be quite big.
I've been looking for a way to implement this on the web but have had no luck.
I tried implementing my own ResourceHttpMessageConverter and substituting the writeInternal() method but this method seems to be called during some sort of buffered operation prior to actually posting the request (so the stream is read all in one go before sending takes place).
I've even tried overriding the CommonsClientHttpRequestFactory.createRequest() method and implementing my own RequestEntity class with a special writeRequest() method but the same issue occurs (stream is all read before actually sending the post).
Am I looking in the wrong place? Has anyone done something similar?
A lot of the stuff I've read on the web about implementing progress bars talks about starting the upload off and then using separate AJAX requests to poll the web server for progress, which seems like an odd way to go about it.
Any help or tips greatly appreciated.
This is an old question but it is still relevant.
I tried implementing my own ResourceHttpMessageConverter and substituting the writeInternal() method but this method seems to be called during some sort of buffered operation prior to actually posting the request (so the stream is read all in one go before sending takes place).
You were on the right track. Additionally, you also needed to disable request body buffering on the RestTemplate's HttpRequestFactory, something like this:
HttpComponentsClientHttpRequestFactory clientHttpRequestFactory = new HttpComponentsClientHttpRequestFactory();
clientHttpRequestFactory.setBufferRequestBody(false);
RestTemplate restTemplate = new RestTemplate(clientHttpRequestFactory);
Here's a working example for tracking file upload progress with RestTemplate.
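To sketch the idea (Kotlin here, with hypothetical names; this is not the linked example): a file-backed Resource whose InputStream counts bytes as the non-buffered request streams them out, so the count can drive a progress bar.
import org.springframework.core.io.FileSystemResource
import java.io.File
import java.io.FilterInputStream
import java.io.InputStream

// A Resource whose InputStream reports how many bytes have been read (i.e. sent),
// so a callback can update a progress bar while postForLocation(...) is running.
class ProgressResource(file: File, private val onProgress: (sent: Long, total: Long) -> Unit)
    : FileSystemResource(file) {

    private val total = file.length()

    override fun getInputStream(): InputStream =
        object : FilterInputStream(super.getInputStream()) {
            private var sent = 0L
            override fun read(b: ByteArray, off: Int, len: Int): Int {
                val n = super.read(b, off, len)
                if (n > 0) { sent += n; onProgress(sent, total) }
                return n
            }
        }
}

// Usage (with request body buffering disabled as shown above):
// restTemplate.postForLocation("https://example.test/upload", ProgressResource(file) { sent, total ->
//     updateProgressBar(sent, total)
// })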
There was not much detail about what this app is or how it works, so this response is vague, but I believe you can do something like this to track your upload progress.
If this really is a Java client app (i.e. not HTML/JavaScript but a Java program) and you really are having it upload a file as a stream, then you should be able to track your upload progress by counting the bytes written to the stream buffer and comparing that to the total byte count of the file.
When you get the file, get its size.
long totalBytes = file.length();
Wherever you are transmitting as a stream, you are presumably adding bytes to an output buffer of some kind:
byte[] bytesFromSomeFileReader = Files.readAllBytes(file.toPath()); // or however you are reading the file
ByteArrayOutputStream byteStreamToServer = new ByteArrayOutputStream();
int bytesTransmitted = 0;
for (byte fileByte : bytesFromSomeFileReader) {
    byteStreamToServer.write(fileByte);
    bytesTransmitted++;
    // Update your progress bar every kilobyte sent.
    if ((bytesTransmitted % 1000) == 0) {
        someMethodToUpdateProgressBar();
    }
}

Block TCP-send till ACK returned

I am programming a client application sending TCP/IP packets to a server. Because of timeout issues I want to start a timer as soon as the ACK packet is returned (so there can be no timeout while the packet has not reached the server). I want to use the winapi.
Setting the socket to blocking mode doesn't help, because the send command returns as soon as the data is written into the buffer (if I am not mistaken). Is there a way to block send until the ACK is returned, or is there any other way to do this without writing my own TCP implementation?
Regards
It sounds like you want to do the minimum implementation to achieve your goal. In that case you should set your socket to blocking, and following the send, which blocks until all data is sent, you call recv, which in turn will block until the server's ACK reply is received or the server end closes or aborts the connection.
If you wanted to go further with your implementation, you'd have to structure your client application in such a way that it supports asynchronous communication. There are a few techniques with varying degrees of complexity: polling using select() (simple), the event model using WSAEventSelect/WSAWaitForMultipleEvents (more challenging), and the I/O completion port model (very complicated).
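Purely to illustrate the blocking send-then-wait shape (shown with JVM sockets rather than the winapi, with made-up names and an assumed maximum ACK size):
import java.net.Socket

fun sendAndWaitForAck(host: String, port: Int, payload: ByteArray): ByteArray {
    Socket(host, port).use { socket ->
        val out = socket.getOutputStream()
        val input = socket.getInputStream()

        // Blocks until the data has been handed to the TCP stack.
        out.write(payload)
        out.flush()

        // Blocks until the server sends its application-level ACK
        // (or closes/aborts the connection, in which case read returns -1 or throws).
        val ack = ByteArray(16)
        val n = input.read(ack)
        if (n <= 0) error("connection closed before ACK")
        return ack.copyOf(n)
    }
}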
Pseudocode... it will wait until the ACK is received, after which you can call whatever functionality you want. I chose a made-up function send_data, which would then send information over the socket after receiving the ACK.
from select import select

data = b''
while True:
    readable, writable, errors = select([socket], [], [])
    if socket in readable:
        data += socket.recv(4096)
        if is_ack(data):
            timer.start()  # not sure why you want this
            break
send_data(socket)
