I'm listening to the EventBus as shown below and calling HttpClient.postAbs() using Vert.x:
public void start(Future<Void> fut) {
    EventBus eb1 = MainAdminVx.serviceBack.getEventBus();
    eb1.consumer("local-message-receiver", message -> {
        HttpClient client = vertx.createHttpClient();
        client.postAbs("http://external-server-address/test#xyz.com/activityIn?activityId=5", r -> {
            r.bodyHandler(b -> System.out.println(b.toString() + r.statusCode()))
             .exceptionHandler(t -> System.err.println(t.getMessage()));
        })
        .putHeader("content-length", "1000")
        .putHeader("userId", "test#xyz.com")
        .putHeader("Content-Type", "application/json")
        .putHeader("Accept", "application/json")
        .write("some text")
        .exceptionHandler(System.err::println)
        .end();
    });
}
Is there anything I'm missing, or is there another way to do this? I'm getting the response after about 2 minutes with postAbs, while the same POST request completes quickly in Postman.
Thanks in advance !!!
We just have to call
setChunked(true)
on the request, and it works like a charm!
As I had set content-length to '1000', the server will wait until it receives 1000 bytes, even though we're not actually sending that much, and only respond after that. Content-Length means the exact byte length of the HTTP body. It is generally used in HTTP/1.1 so that the receiving party knows when the current response/request has finished, and the connection can be reused for another request.
So what is chunked transfer encoding?
Chunked transfer encoding is a data transfer mechanism in version 1.1 of the Hypertext Transfer Protocol (HTTP) in which data is sent in a series of "chunks". It uses the Transfer-Encoding HTTP header in place of the Content-Length header, which the earlier version of the protocol would otherwise require. Because the Content-Length header is not used, the sender does not need to know the length of the content before it starts transmitting a response to the receiver. Senders can begin transmitting dynamically-generated content before knowing the total size of that content.
Alternatively, Content-Length can be omitted and chunked encoding used instead; if both are missing, then the connection must be closed at the end of the response.
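To see this in practice, here is a minimal, self-contained Go sketch (not from the question; the test server comes from Go's httptest package). A handler that flushes mid-response cannot know its total body length up front, so the server drops Content-Length and switches to chunked encoding:
package main

import (
    "fmt"
    "net/http"
    "net/http/httptest"
)

func main() {
    // A handler that flushes mid-response cannot know the total body
    // length up front, so Go answers with Transfer-Encoding: chunked.
    srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprint(w, "first chunk")
        w.(http.Flusher).Flush() // length unknown from here on
        fmt.Fprint(w, "second chunk")
    }))
    defer srv.Close()

    resp, err := http.Get(srv.URL)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    fmt.Println("TransferEncoding:", resp.TransferEncoding) // [chunked]
    fmt.Println("ContentLength:", resp.ContentLength)       // -1, i.e. unknown
}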
I hope this helps you understand the HTTP chunking concept.
When I read the source code of the Go net/http package, it keeps mentioning the difference between client requests and server requests. I want to know when each of these is generally used. (Client requests are well understood, of course, and are widely used in websites with a front-end/back-end separation architecture.)
e.g.
//
// For client requests, a nil body means the request has no
// body, such as a GET request. The HTTP Client's Transport
// is responsible for calling the Close method.
//
// For server requests, the Request Body is always non-nil
// but will return EOF immediately when no body is present.
// The Server will close the request body. The ServeHTTP
// Handler does not need to.
//
// Body must allow Read to be called concurrently with Close.
// In particular, calling Close should unblock a Read waiting
// for input.
Body io.ReadCloser
You can create a Request using http.NewRequest, then set its fields and call Client.Do to issue the request. In this case, Request.Body is a field the client has to set if the request has a body.
When this request is handled on the server side, a new instance of http.Request is created, and that is a server request. For this use, the Body is never nil.
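As a self-contained sketch of both sides (the handler and payload are invented for illustration; httptest lets them run in one program):
package main

import (
    "fmt"
    "io"
    "net/http"
    "net/http/httptest"
    "strings"
)

func main() {
    // Server side: the *http.Request a handler receives is a server request.
    // Its Body is never nil; it just returns EOF immediately when empty.
    srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        b, _ := io.ReadAll(r.Body) // safe even for bodiless requests
        fmt.Fprintf(w, "server saw %q", b)
    }))
    defer srv.Close()

    // Client side: we construct the Request ourselves, so we decide whether
    // Body is nil (e.g. a GET) or set (e.g. a POST with a payload).
    req, err := http.NewRequest("POST", srv.URL, strings.NewReader("hello"))
    if err != nil {
        panic(err)
    }
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    out, _ := io.ReadAll(resp.Body)
    fmt.Println(string(out)) // server saw "hello"
}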
client := &http.Client{
    Timeout: 30 * time.Second,
}
var tr = &http2.Transport{} // golang.org/x/net/http2
client.Transport = tr
I create a client with the http2 transport and send an HTTP/2 request, but in the DumpRequest output I see:
GET / HTTP/1.1
Host: www.xxxxq23.com
In the response dump I see HTTP/2.0.
Why does the request use HTTP/1.1?
How can I change it to HTTP/2.0?
HTTP/2 is binary rather than textual, and dumping it in binary form would be unreadable and useless. This is intentional by design and well documented:
DumpRequest returns the given request in its HTTP/1.x wire
representation. It should only be used by servers to debug client
requests. The returned representation is an approximation only; some
details of the initial request are lost while parsing it into an
http.Request. In particular, the order and case of header field names
are lost. The order of values in multi-valued headers is kept intact.
HTTP/2 requests are dumped in HTTP/1.x form, not in their original
binary representations.
If body is true, DumpRequest also returns the body. To do so, it
consumes req.Body and then replaces it with a new io.ReadCloser that
yields the same bytes. If DumpRequest returns an error, the state of
req is undefined.
You can check out the implementation details here.
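Here is a minimal sketch showing both facts at once (the URL is only a placeholder; any HTTP/2-capable HTTPS endpoint will do): httputil.DumpRequestOut renders the request in HTTP/1.x wire form, while resp.Proto reports the protocol that was actually negotiated on the connection.
package main

import (
    "fmt"
    "net/http"
    "net/http/httputil"
    "time"

    "golang.org/x/net/http2"
)

func main() {
    client := &http.Client{
        Timeout:   30 * time.Second,
        Transport: &http2.Transport{},
    }

    // Placeholder endpoint; substitute any HTTP/2-capable HTTPS server.
    req, err := http.NewRequest("GET", "https://www.google.com/", nil)
    if err != nil {
        panic(err)
    }

    // The dump is always rendered in HTTP/1.x wire form, whatever the
    // connection will actually negotiate.
    dump, _ := httputil.DumpRequestOut(req, false)
    fmt.Printf("%s", dump) // first line: GET / HTTP/1.1

    resp, err := client.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    fmt.Println("negotiated protocol:", resp.Proto) // HTTP/2.0
}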
Is there a way to do something like client.Do("POST", "example.com", body) and read the response headers before the entire response body has been received/closed? This would be similar to how JavaScript XHR requests emit an event when the headers have been received, so you can read them before the rest of the response arrives.
What I'm trying to accomplish is a sort of "smart client" that uses information in the headers from my server to determine what to upload in the request body. So I need to start the request, read the response headers, then start writing the request body. Because of the nature of my system, I can't split this across separate requests. I believe it's possible at the protocol level, but I'm not sure whether Go's http libraries support it.
The http client's Do function doesn't block until the whole response body has been returned; it returns as soon as the headers have been read. If you don't want to read the full response, why not just call res.Body.Close() after you have examined the headers? I think that should give you roughly the same behavior. According to the docs:
The response body is streamed on demand as the Body field is read. If the network
connection fails or the server terminates the response, Body.Read calls return an error.
Note, though, that the DefaultTransport of the default http.Client, which is an http.Transport, doesn't guarantee that it won't read any body bytes before you ask for them.
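A hedged sketch of that pattern (the URL is just a placeholder):
package main

import (
    "fmt"
    "net/http"
)

func main() {
    // Do/Get return as soon as the status line and headers have arrived;
    // the body is only downloaded as you read it.
    resp, err := http.Get("https://example.com/") // placeholder URL
    if err != nil {
        panic(err)
    }
    fmt.Println(resp.StatusCode, resp.Header.Get("Content-Type"))

    // Closing without reading discards the rest of the response.
    resp.Body.Close()
}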
Alternatively, you can fulfill your requirements by sending an OPTIONS request to the URL before sending the actual request and reading the response headers.
The response will contain all the headers necessary to perform the preferred request.
req, _ := http.NewRequest("OPTIONS", "https://example.com/", nil) // errors elided for brevity
resp, _ := client.Do(req)
defer resp.Body.Close()
I have a question about Spring Reactive WebClient...
A few days ago I decided to play with the new reactive stuff in Spring Framework, and I made a small project for scraping data, purely for personal purposes (making multiple requests to one webpage and combining the results).
I started using the new reactive WebClient for making requests, but the problem I found is that the client does not emit a response for every request. Sounds strange. Here is what I did to fetch the data:
private Mono<String> fetchData(String uri) {
    return this.client
            .get()
            .uri(uri)
            .header("X-Fsign", "SW9D1eZo")
            .retrieve()
            .bodyToMono(String.class)
            .timeout(Duration.ofSeconds(35))
            .log("category", Level.ALL, SignalType.ON_ERROR, SignalType.ON_COMPLETE, SignalType.CANCEL, SignalType.REQUEST);
}
And the function that calls fetchData:
public Mono<List<Stat>> fetch() {
    return fetchData(URL)
            .map(this::extractUrls)
            .doOnNext(System.out::println)
            .doOnNext(s -> System.out.println("all ids are " + s.size()))
            .flatMapIterable(q -> q)
            .map(s -> s.substring(7, 15))
            .map(s -> "http://d.flashscore.com/x/feed/d_hh_" + s + "_en_1") // list of N-length urls
            .flatMap(this::fetchData)
            .map(this::extractHeadToHead)
            .collectList();
}
and the subscriber:
FlashScoreService bean = ctx.getBean(FlashScoreService.class);
bean.fetch().subscribe(s -> {
    System.out.println("finished !!! " + s.size()); // expecting the same N-length list size
}, Throwable::printStackTrace);
The problem appears if I make somewhat more requests, say more than 100.
I don't get responses for all of them: no error is thrown, no error response code is returned, and the subscribe callback is invoked with a size different from the number of requests.
The requests I make are based on a list of strings (URLs), and after all responses are emitted I should receive all of them as a list, because I'm using collectList(). When I execute 100 requests, I expect to receive a list of 100 responses, but I actually receive sometimes 100, sometimes 96, etc. Maybe something fails silently.
This is easily reproducible; here is my GitHub project link.
Sample output:
all ids are 176
finished !!! 171
Please give me suggestions on how to debug this, or tell me what I'm doing wrong. Help is appreciated.
Update:
The log shows that if I pass 126 URLs, for example:
onNext(ReactorClientHttpResponse{request=[GET/some_url],status=200}) is called 121 times. Maybe this is where the problem is.
onComplete() is called 126 times, which is exactly the length of the passed list of URLs.
But how is it possible for some of the requests to complete without calling onNext() or onError()? (These are the success and error signals of a Mono.)
I think the problem is not in the WebClient but somewhere else, such as the environment or the server blocking the requests; but then I would expect to see some error logged.
ps. Thanks for the help !
This is a tricky one. Debugging the actual HTTP frames received, it seems we're really not getting responses for some requests. Debugging a little more with Wireshark, it looks like the remote server is requesting the end of the connection with a FIN, ACK TCP packet and that the client acknowledges it. The problem is this connection is still taken from the pool to send another GET request after the first FIN, ACK TCP packet.
Maybe the remote server is closing connections after they've served a number of requests; in any case it's perfectly legal behavior. Note that I'm not reproducing this consistently.
Workaround
You can disable connection pooling on the client; this will be slower and apparently doesn't trigger this issue. For that, use the following:
this.client = WebClient.builder()
        .clientConnector(new ReactorClientHttpConnector(new Consumer<HttpClientOptions.Builder>() {
            @Override
            public void accept(HttpClientOptions.Builder builder) {
                builder.disablePool();
            }
        }))
        .build();
Underlying issue
The root problem is that the HTTP client should not call onComplete when the TCP connection is closed without a response having been sent. Or better, the HTTP client should not reuse a connection while it's being closed. I'll report back here when I know more.
I have a function that just makes a GET request to check the status code. It does not read anything from the body. Should I still end the function with resp.Body.Close()?
Callers should close resp.Body when done reading from it. If resp.Body is not closed, the Client's underlying RoundTripper (typically Transport) may not be able to re-use a persistent TCP connection to the server for a subsequent "keep-alive" request.
Yes. When you call http.Get, the function returns a response as soon as all the HTTP headers have been read. The body of the response has not been read yet. The Response.Body is a wrapper around the network connection to the server. When you read from it, it downloads the body of the response.
.Close() tells the system that you're done with the network connection. If you have not read the response body, the default http transport closes the connection. (The transport can only re-use the connection if the body has been read, because if it reused a connection with an unread body the next request made using that connection would receive the previous request's response!)
So reading the Body before closing is often more efficient than simply calling Close() if you're making more than one request, especially over TLS connections, which are relatively expensive to create.
If you don't need the body of the response, consider using Head instead of Get. A HEAD response carries no body, so there's nothing to read (though it's still good practice to close resp.Body).
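Putting this together, a minimal sketch (checkStatus and the URL are hypothetical, just for illustration):
package main

import (
    "fmt"
    "io"
    "net/http"
)

// checkStatus illustrates the pattern: close the body even when only the
// status code is needed, and drain it first so the underlying connection
// can be reused for subsequent keep-alive requests.
func checkStatus(url string) (int, error) {
    resp, err := http.Get(url)
    if err != nil {
        return 0, err
    }
    defer resp.Body.Close()
    io.Copy(io.Discard, resp.Body) // drain so the connection can be reused
    return resp.StatusCode, nil
}

func main() {
    status, err := checkStatus("https://example.com/") // placeholder URL
    if err != nil {
        panic(err)
    }
    fmt.Println("status:", status)

    // Alternatively, a HEAD request avoids transferring the body at all.
    resp, err := http.Head("https://example.com/")
    if err != nil {
        panic(err)
    }
    resp.Body.Close()
    fmt.Println("HEAD status:", resp.StatusCode)
}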