OkHttp support for connection pooling in HTTP/2

Does OkHttp support connection pooling for its HTTP/2 client?
The official doc https://square.github.io/okhttp/ says this:
"Connection pooling reduces request latency (if HTTP/2 isn’t available)".
Does OkHttp have any way to configure the max concurrent streams allowed on a connection, per RFC 7540 https://www.rfc-editor.org/rfc/rfc7540#section-5.1.2? What is the default value for max concurrent streams sent by the OkHttp HTTP/2 client? What happens when the receiver sends a REFUSED_STREAM error because the max-concurrent-streams threshold has been reached? Is the request retried, or does it fail?

Does OkHttp have any way to configure the max concurrent streams allowed on a connection, per RFC 7540 https://www.rfc-editor.org/rfc/rfc7540#section-5.1.2?
Not currently.
What is the default value for max concurrent streams sent by the OkHttp HTTP/2 client?
The limit advertised by the server is honored.
What happens when the receiver sends a REFUSED_STREAM error because the max-concurrent-streams threshold has been reached? Is the request retried, or does it fail?
OkHttp will fail the request all the way to the application. You shouldn’t expect to see this though, because OkHttp honors the max-concurrent-streams limit.
Note that connection creation is racy, so this limit may only become known to OkHttp after it has already created multiple streams. In such cases, the server is expected to permit those streams.
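For reference, a minimal sketch (the URL is a placeholder; the pool sizing of 5 idle connections for 5 minutes matches OkHttp's defaults) showing a client with an explicit connection pool. Concurrent requests to the same HTTP/2 server coalesce onto a single pooled connection:

import java.util.concurrent.TimeUnit;
import okhttp3.ConnectionPool;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

public class Http2PoolExample {
    public static void main(String[] args) throws Exception {
        OkHttpClient client = new OkHttpClient.Builder()
                // Keep up to 5 idle connections alive for 5 minutes
                // (illustrative values; these match the defaults).
                .connectionPool(new ConnectionPool(5, 5, TimeUnit.MINUTES))
                .build();

        // Placeholder URL; any HTTPS server that negotiates h2 will do.
        Request request = new Request.Builder()
                .url("https://http2.example.com/")
                .build();

        try (Response response = client.newCall(request).execute()) {
            // Prints "h2" for requests that went over HTTP/2.
            System.out.println(response.protocol() + " " + response.code());
        }
    }
}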

Related

How to limit the number of HTTP connections for a REST web service

We want to limit the number of connections for our REST web service.
We are using Spring Boot with Jetty as the server.
We have configured the settings below:
#rate limit connections
server.jetty.acceptors=1
server.jetty.selectors=1
#connection time out in milliseconds
server.connection-timeout=-1
Now, as you can see, there is no idle timeout applicable to connections.
This means a connection, once open, will remain active until it is explicitly closed.
So, with these settings, my understanding is that if I open more than 1 connection, I should not get any response, because the connection limit is only 1.
But this does not seem to be working. A response is sent for each request.
I am sending requests from 3 different clients. I have verified the IP addresses and ports; they are all different for the 3 clients. But all 3 remain active once their connections are established.
Can any experts guide me on this?
Setting the acceptors and selectors to 1 will not limit the max number of connections.
I suggest you look at using either the Jetty QoS filter or the Jetty ConnectionLimit module.
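For example, with Spring Boot 2's embedded Jetty you could register Jetty's ConnectionLimit bean through a server customizer. A hedged sketch (the limit of 1 mirrors the question; ConnectionLimit requires Jetty 9.4.6 or newer):

import org.eclipse.jetty.server.ConnectionLimit;
import org.springframework.boot.web.embedded.jetty.JettyServletWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class JettyConnectionLimitConfig {

    @Bean
    public WebServerFactoryCustomizer<JettyServletWebServerFactory> jettyCustomizer() {
        return factory -> factory.addServerCustomizers(server ->
                // Stop accepting new connections once 1 is open; further
                // clients wait in the OS accept queue instead of failing.
                server.addBean(new ConnectionLimit(1, server)));
    }
}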

Spring websockets + Amazon MQ limitations

We want to use Spring WebSockets + STOMP + Amazon MQ as a full-featured message broker. We were trying to do benchmarking to find out how many client WebSocket connections a single Tomcat node can handle. But it appears that we hit the Amazon MQ connection limit first. As per the AWS documentation, Amazon MQ has a limit of 1000 connections per node (as far as I understand we can ask support to increase the limit, but I doubt it can be increased by much). So my questions are:
1) Am I correct in assuming that for every WebSocket connection from a client to the Spring/Tomcat server, a corresponding connection is opened from the server to the broker? Is this correct behavior, or are we doing something wrong here/missing something?
2) What can be done here? I mean, I don't think it is a good idea to create a broker node per every 1000 users.
According to https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/messaging/simp/stomp/StompBrokerRelayMessageHandler.html you are doing everything right, and it is documented behavior.
Quote from the javadoc:
For each new CONNECT message, an independent TCP connection to the broker is opened and used exclusively for all messages from the client that originated the CONNECT message. Messages from the same client are identified through the session id message header. Reversely, when the STOMP broker sends messages back on the TCP connection, those messages are enriched with the session id of the client and sent back downstream through the MessageChannel provided to the constructor.
As for a fix, you can write your own message broker relay with TCP connection pooling.
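For context, this is roughly the standard relay configuration that produces the documented one-connection-per-client behavior (the endpoint path, host, port, and credentials below are placeholders):

import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.StompEndpointRegistry;
import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        // Clients open their WebSocket connections here.
        registry.addEndpoint("/ws").withSockJS();
    }

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        // Each client CONNECT results in one dedicated TCP connection
        // from the relay to the broker (Amazon MQ in this question).
        registry.enableStompBrokerRelay("/topic", "/queue")
                .setRelayHost("broker.example.com")
                .setRelayPort(61613)
                .setClientLogin("guest")
                .setClientPasscode("guest");
    }
}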

QPID JMS Heartbeat / Keepalive

Is it possible to set a heartbeat or keep-alive for a JMS consumer using Qpid JMS? I've found some Qpid configuration that can be set on the URL, like an idleTimeout, but I've not found an option to send empty frames at a fixed interval.
The Qpid JMS client allows you to configure its idle timeout, which controls when the client will consider the remote peer to have failed should no traffic arrive from it, either in the form of messages or of empty frames sent to keep the connection from idling out. The client will itself respond to the remote peer's requested idle-timeout interval by sending empty frames as needed, to ensure that the remote doesn't drop the connection due to inactivity.
If you are seeing connection drops due to idle timeout on a server, then it is likely you have not configured the server to provide an idle-timeout value in the Open performative that it sends to the client.
Reading the specification section on Idle Timeout of a Connection can shed some light on how this works.
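As an illustration, the client-side idle timeout can be set on the connection URI. A minimal sketch, assuming the Qpid JMS client is on the classpath (host and credentials are placeholders; amqp.idleTimeout is in milliseconds):

import javax.jms.Connection;
import org.apache.qpid.jms.JmsConnectionFactory;

public class QpidIdleTimeoutExample {
    public static void main(String[] args) throws Exception {
        // amqp.idleTimeout: how long (in ms) without incoming traffic before
        // this client considers the remote failed. The broker advertises its
        // own idle timeout in its Open frame, and the client honors it by
        // sending empty frames as needed.
        JmsConnectionFactory factory = new JmsConnectionFactory(
                "amqp://broker.example.com:5672?amqp.idleTimeout=30000");
        Connection connection = factory.createConnection("user", "password");
        connection.start();
        // ... create sessions and consumers as usual ...
        connection.close();
    }
}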

Spring WebFlux WebClient: set connection keep-alive time

Just starting to use Spring WebFlux WebClient. I wanted to know what the default keep-alive time for the HTTP connection is. Is there a way to increase the keep-alive time? In our REST service we get a request roughly every five minutes, and that request takes a long time to process, anywhere between 500 seconds and 10 seconds. However, in a load test, if I send frequent requests the processing time is less than 250 ms.
Spring WebFlux WebClient is an HTTP client API that wraps actual HTTP libraries, so configuration such as connection management and timeouts is done at the library level directly, and behavior might change depending on the chosen library.
The default library with WebClient is Reactor Netty.
Many HTTP clients (and this is the case with Reactor Netty) are maintaining HTTP connections in a connection pool to reuse them. Clients usually acquire a new connection to a remote host, use it to send/receive information and then put it back in the connection pool. This is very useful since sometimes acquiring a new connection can be costly. This seems to be really costly in your case.
HTTP clients leave those unused connections in the pool, but what about keepAlive time?
Most clients leave those connections in the pool as long as possible and test them before acquiring them to see if they're still valid or listen to server events asynchronously to remove them from the pool (I believe Reactor Netty does that). So ultimately, the server is in control and decides when to close connections if they're inactive.
Now your problem description might suggest that connecting to that remote host is very costly, but it could be also the remote host taking a long time to respond to your requests (for example, it might be operating on an empty cache and needs to calculate a lot of things).
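If you do want control over how long pooled connections are kept around, here is a sketch of wiring a custom Reactor Netty ConnectionProvider into WebClient (the pool name, sizes, durations, and URL are illustrative; the builder API assumes a recent Reactor Netty version):

import java.time.Duration;
import org.springframework.http.client.reactive.ReactorClientHttpConnector;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.netty.http.client.HttpClient;
import reactor.netty.resources.ConnectionProvider;

public class WebClientPoolExample {
    public static void main(String[] args) {
        // Pool capped at 50 connections that evicts any connection
        // idle for more than 5 minutes (illustrative values).
        ConnectionProvider provider = ConnectionProvider.builder("custom-pool")
                .maxConnections(50)
                .maxIdleTime(Duration.ofMinutes(5))
                .build();

        WebClient webClient = WebClient.builder()
                .clientConnector(new ReactorClientHttpConnector(HttpClient.create(provider)))
                .build();

        String body = webClient.get()
                .uri("https://example.org/")
                .retrieve()
                .bodyToMono(String.class)
                .block();
        System.out.println(body);
    }
}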

What is websocket.Upgrader exactly?

I am trying to learn about WebSockets, and I am not sure I understand what exactly Upgrader does in gorilla/websocket.
http://www.gorillatoolkit.org/pkg/websocket#Upgrader
Can someone please explain in simple terms what exactly the buffer sizes mean?
The Upgrader.Upgrade method upgrades the HTTP server connection to the WebSocket protocol as described in the WebSocket RFC. A summary of the process is this: The client sends an HTTP request requesting that the server upgrade the connection used for the HTTP request to the WebSocket protocol. The server inspects the request and if all is good the server sends an HTTP response agreeing to upgrade the connection. From that point on, the client and server use the WebSocket protocol over the network connection.
Applications use Upgrader fields to specify options for the upgrade operation.
The WebSocket connection buffers reads and writes to the underlying network connection. ReadBufferSize and WriteBufferSize specify the sizes of these buffers. It's usually best to use the default sizes by setting ReadBufferSize and WriteBufferSize to zero. Larger buffers take more memory. Smaller buffers can result in more calls to the underlying network connection. The buffer sizes do not limit the size of a message that can be read.
