I am using a reactor-netty client and setting up HTTP/2 via ALPN. When I set up a ReadTimeoutHandler via the .doOnConnected(...) hook, I end up getting this exception:
Jan 09, 2021 6:40:35 PM io.netty.channel.DefaultChannelPipeline onUnhandledInboundException
WARNING: An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.timeout.ReadTimeoutException
This seems benign and the response still comes back fine, so I'm not really sure what's going on there, but when I start driving my client hard, particularly with Gatling, I start to see failed responses via PrematureCloseException, which would not be acceptable in a production environment. This behavior goes away as soon as I remove the ReadTimeoutHandler.
I'm not sure what's going on here, and I'm not sure whether I'm setting something up incorrectly for HTTP/2. As far as I've read and as far as I can tell, this is the recommended way to extend the client and set up the ReadTimeoutHandler. It works perfectly over HTTP/1.1; I'm not sure what changes over HTTP/2.
Here's a recreation of how the client can be set up to reproduce this -- simply use this client to hit your favorite HTTP/2 endpoint.
I am using the Project Reactor BOM version 2020.0.2.
public static HttpClient createClient() {
    final String[] alpnProtocols =
            new String[]{ApplicationProtocolNames.HTTP_2};
    // if the application protocol is set to null, the Netty code will set it to
    // JdkDefaultApplicationProtocolNegotiator.INSTANCE (which is package protected)
    final ApplicationProtocolConfig APPLICATION_PROTOCOL_CONFIG =
            new ApplicationProtocolConfig(
                    ApplicationProtocolConfig.Protocol.ALPN,
                    // NO_ADVERTISE is currently the only mode supported by both OpenSsl and JDK providers.
                    ApplicationProtocolConfig.SelectorFailureBehavior.NO_ADVERTISE,
                    // ACCEPT is currently the only mode supported by both OpenSsl and JDK providers.
                    ApplicationProtocolConfig.SelectedListenerFailureBehavior.ACCEPT,
                    alpnProtocols);
    final String[] SUPPORTED_PROTOCOLS =
            new String[]{"TLSv1.3"}; // note that HTTP/2 only works with TLSv1.3 or TLSv1.2
    // a null here (for non-HTTP/2) would result in the default cipher suites for HTTP/1.1
    final List<String> CIPHER_SUITES =
            Arrays.asList(new String[]{"TLS_AES_256_GCM_SHA384"});
    try {
        System.setProperty("io.netty.handler.ssl.noOpenSsl", "false"); // lazy way to set this up
        SslContext sslContext = SslContextBuilder
                .forClient()
                .sslProvider(io.netty.handler.ssl.SslProvider.OPENSSL)
                .ciphers(CIPHER_SUITES)
                .applicationProtocolConfig(APPLICATION_PROTOCOL_CONFIG)
                .trustManager(InsecureTrustManagerFactory.INSTANCE)
                .protocols(SUPPORTED_PROTOCOLS)
                .build();
        return HttpClient.create()
                .port(3443)
                .baseUrl("https://localhost")
                .secure(spec -> spec.sslContext(sslContext))
                .protocol(HttpProtocol.H2)
                .doOnConnected(con -> con.addHandlerLast(
                        new ReadTimeoutHandler(4_500, TimeUnit.MILLISECONDS)));
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
}
PrematureCloseException is what you get when the connection gets closed by the remote peer while Gatling is trying to write on it.
It's a perfectly normal situation when reusing a pooled keep-alive connection, as the network is not instant.
Old versions of Gatling did not handle this case perfectly. If you're using an old version, you should upgrade (the latest as of now is 3.5.0).
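If you also want to shrink the window for this on the reactor-netty side, one common mitigation (not something the Gatling upgrade requires; the pool name, sizes and durations below are purely illustrative) is to cap how long pooled connections may sit idle so the client retires them before the remote peer closes them:
import java.time.Duration;
import reactor.netty.http.HttpProtocol;
import reactor.netty.http.client.HttpClient;
import reactor.netty.resources.ConnectionProvider;

public class PooledClientFactory {
    public static HttpClient createPooledClient() {
        // Illustrative values; keep maxIdleTime below the server's own keep-alive/idle timeout.
        ConnectionProvider provider = ConnectionProvider.builder("h2-pool")
                .maxConnections(50)
                .maxIdleTime(Duration.ofSeconds(20))
                .maxLifeTime(Duration.ofMinutes(5))
                .build();
        return HttpClient.create(provider)
                .baseUrl("https://localhost:3443") // placeholder, mirroring the question
                .protocol(HttpProtocol.H2);
    }
}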
I would like to set different proxies while the application is running. However, it seems that the first request after setting a new proxy uses the old proxy settings! Subsequent requests use the new proxy settings...
How am I supposed to set a new proxy?
Things I have tried:
1) Setting the new proxy via different methods:
QNetworkProxy::setApplicationProxy(networkProxy);
or
QNetworkProxyFactory::setApplicationProxyFactory(networkProxyFactory);
// then later:
networkProxyFactory.setNewProxy(hostname, port);
2) Making sure that the previous page stops loading before setting a new proxy:
QWebEnginePage::triggerAction(QWebEnginePage::Stop, true);
3) Making sure that the event loop runs between setting the new proxy and sending the request:
//set new proxy via the above methods
QCoreApplication::processEvents();
QCoreApplication::postEvent(..., Qt::LowEventPriority);
// now the event loop can run, in case changing the proxy
// needs to process some events;
// our event is set to low priority
// so that it does not preempt possible proxy events
QWebEnginePage::event(QEvent *e) {
    loadRequest();
}
None of these approaches works. The proxies themselves work fine, but a new proxy only takes effect from the second request after it is set.
setInitialProxy(proxy1); // this works from the get-go
app.exec()
setNewProxy(proxy2);
QWebEnginePage::load(requestA); // uses proxy1
QWebEnginePage::load(requestB); // uses proxy2
Is it even possible to change proxies after app.exec(), or is it just blind luck that it works as well as it does? I would assume that if a browser is built around WebEngine, the user should be able to change the proxy settings...
In my current project we are using AWS Lambda to make a REST call to an external service and consume the response. The happy path works fine, but the connection-timeout and socket-timeout scenarios do not work as expected. A few more details below.
When making a call to the external system, if the read-timeout scenario happens (the connection to the external system is established, but no response is received within 15 seconds), the Lambda keeps waiting for the response until the Lambda timeout (25 seconds) and then returns an error.
But I expect the REST-call code invoked within the Lambda to throw a SocketTimeoutException (or a related exception), which is not happening.
When I try the same thing with a sample Java program (using Apache's HttpClient implementation, which is what I have used in the Lambda), it works perfectly fine and the proper exception is thrown.
Initially we tried the Jersey implementation for making the REST call and, thinking it was the problem, changed to the HttpClient implementation, but neither of them throws the exception the way the sample Java code does.
Please let me know your suggestions, or solutions if you have already faced this.
Below is the code snippet that I use in both the Lambda and the sample program for making the REST call (this whole block is wrapped in a try-catch).
HttpPost post = new HttpPost(URL);
RequestJSONObject request = new RequestJSONObject();
// set the required request payload on the request object
ObjectMapper mapper = new ObjectMapper();
String jsonStr = mapper.writeValueAsString(request);
post.setEntity(new StringEntity(jsonStr));
post.addHeader("content-type", "application/json");
RequestConfig config = RequestConfig.custom()
        .setConnectTimeout(1000)  // time allowed to establish the connection
        .setSocketTimeout(3000)   // time allowed to wait for data once connected
        .build();
CloseableHttpClient httpClient =
        HttpClientBuilder.create().setDefaultRequestConfig(config).build();
CloseableHttpResponse response = httpClient.execute(post);
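For reference, here is a trimmed, self-contained sketch of the same call (the URL and payload are placeholders, not the real service), with the catch block where the SocketTimeoutException would be expected to surface:
import java.net.SocketTimeoutException;
import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClientBuilder;

public class RestCallSketch {
    public static void main(String[] args) {
        // placeholder endpoint; in the real code this is the external service URL
        String url = "https://example.com/api";
        RequestConfig config = RequestConfig.custom()
                .setConnectTimeout(1000)
                .setSocketTimeout(3000)
                .build();
        try (CloseableHttpClient httpClient =
                     HttpClientBuilder.create().setDefaultRequestConfig(config).build()) {
            HttpPost post = new HttpPost(url);
            post.setEntity(new StringEntity("{\"key\":\"value\"}")); // placeholder payload
            post.addHeader("content-type", "application/json");
            try (CloseableHttpResponse response = httpClient.execute(post)) {
                System.out.println(response.getStatusLine());
            }
        } catch (SocketTimeoutException e) {
            // this is where the read timeout is expected to show up
            e.printStackTrace();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}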
Thanks,
Ganesh Karthik C.
We're using RabbitMQ + StompJS (with SockJS & Spring WebSocket as middleware, FWIW) to facilitate broadcasting messages over websockets. Everything is working great, except that no matter what we try, StompJS creates the queues as non-auto-delete, meaning we end up with TONS of queues.
We're working around it right now with a policy that cleans out inactive queues after several hours, but we'd rather just have auto-delete queues that terminate after all clients disconnect.
We've attempted setting headers auto_delete, auto-delete, autoDelete and every other possible incantation we can find.
If we stop and inspect the frames before they're transmitted (at the lowest possible level in the depths of StompJS's source) we can see those headers being present. However, they don't seem to be making it to RabbitMQ (or it just doesn't look at them on the SUBSCRIBE frame??) and it creates the queues as non-auto-delete.
Interestingly, if we create the queue manually beforehand as auto-delete, the StompJS registration calls error out because the requested SUBSCRIBE expected non-auto-delete. This suggests that it's StompJS (or SockJS) that explicitly states non-auto-delete, but we've pored over the source and ruled that out.
So, the million dollar question: how can we have auto-delete queues with StompJS? Please, pretty please, and thanks in advance :)
Example registration
function reg(dest, callback, headers) {
  stomp.subscribe(dest, callback, headers);
}
function cb(payload) {
  console.log(JSON.parse(payload.body));
}
reg('/queue/foobar', cb, {});
Setup details
RabbitMQ 3.5.2 and StompJS 2.3.3
Note: If I subscribe directly to the exchange (with destinations like /exchange/foo or /topic/foo), the exchange will be defined as auto-delete. It's only queues that aren't auto-delete.
I'm using StompJS/RabbitMQ in production and I'm not seeing this issue. I can't say for sure what your problem is, but I can detail my setup in the hope you might spot some differences that may help.
I'm running against RabbitMQ 3.0.1.
I'm using SockJS 0.3.4. I seem to recall having some issues using a more recent release from GitHub, but unfortunately I didn't take notes so I'm not sure what the issue was.
I'm using StompJS 2.3.4
For reasons I won't go into here, I've disabled the WebSockets transport by whitelisting all the other transports.
Here's some simplified code showing how I connect:
var socket = new SockJS(config.stompUrl, null, { protocols_whitelist: ['xdr-streaming', 'xhr-streaming', 'iframe-eventsource', 'iframe-htmlfile', 'xdr-polling', 'xhr-polling', 'iframe-xhr-polling', 'jsonp-polling'] });
var client = Stomp.over(socket);
client.debug = function () { };
client.heartbeat.outgoing = 0;
client.heartbeat.incoming = 0;
client.connect(config.rabbitUsername, config.rabbitPassword, function () {
  onConnected();
}, function () {
  reconnect(d);
}, '/');
And here's how I disconnect:
// close the socket first, otherwise STOMP throws an error on disconnect
socket.close();
client.disconnect(function () {
  isConnected = false;
});
And here's how I subscribe (this happens inside my onConnected function):
client.subscribe('/topic/{routing-key}', function (x) {
  var message = JSON.parse(x.body);
  // do stuff with message
});
My first recommendation would be to try the specific versions of the client libs I've listed. I had some issues getting these to play nicely - and these versions work for me.
It is possible with RabbitMQ 3.6.0+ by setting the auto-delete header to true in the SUBSCRIBE headers. Please see https://www.rabbitmq.com/stomp.html#queue-parameters for details.
I use the Restlet client to send REST requests to the server.
public class RestHandler {
    protected ClientResource resource = null;
    protected Client client = null;

    public void connect(final String address, final Protocol protocol) {
        final Context context = new Context();
        if (client == null) {
            logger.info("Create Client.");
            client = new Client(context, protocol);
        }
        resource = new ClientResource(context, new Reference(protocol, address));
        resource.setNext(client);
        resource.setEntityBuffering(true);
    }
}
In its child classes, resource.get()/post()/put()/delete() is used to send the REST request.
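For context, a child class uses it roughly like this (the class name and the address are illustrative, not from the real project):
import org.restlet.data.Protocol;

public class UserRestHandler extends RestHandler { // hypothetical child class
    public String fetchUsers() throws Exception {
        // placeholder address; connect() builds the ClientResource as shown above
        connect("localhost:8080/users", Protocol.HTTP);
        // resource.get() issues the GET request; getText() reads the response body
        return resource.get().getText();
    }
}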
I found that the response comes back very slowly the first time (5-10 s).
It then gets faster over the next few requests.
But if I wait about 10 minutes and send the request again, it becomes slow again.
Is there any way to make the response come back faster?
You can try using another client connector. The connector can be the cause of your problem, especially if you use the default one; note that the default connector should be used for development only.
This page gives you all the available client connectors: http://restlet.com/technical-resources/restlet-framework/guide/2.3/core/base/connectors.
Regarding client connectors, you can configure properties to tune them. To use a client connector, simply put the corresponding Restlet extension on your classpath. Perhaps you can try the extension org.restlet.ext.httpclient.
This answer could help you regarding connector configuration and properties: Restlet HTTP Connection Pool.
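To illustrate (a rough sketch, assuming the org.restlet.ext.httpclient jar is on the classpath so that connector is picked up; the parameter values are only placeholders to tune for your load), connector properties are passed through the client's context:
import org.restlet.Client;
import org.restlet.Context;
import org.restlet.data.Protocol;
import org.restlet.resource.ClientResource;

public class TunedClientFactory {
    public static ClientResource createResource(String uri) {
        Context context = new Context();
        // connector tuning; parameter names follow the connector documentation,
        // the values here are placeholders
        context.getParameters().add("maxConnectionsPerHost", "10");
        context.getParameters().add("maxTotalConnections", "20");

        Client client = new Client(context, Protocol.HTTPS);
        ClientResource resource = new ClientResource(context, uri);
        // reuse the same Client (and its connection pool) for every resource
        resource.setNext(client);
        return resource;
    }
}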
Hope it helps you,
Thierry
The WebSocketTransformer docs say that it tries to upgrade HttpRequests according to the RFC 6455 WebSocket standard:
This transformer strives to implement web sockets as specified by RFC6455.
and they provide this Dart example code:
HttpServer server;
server.listen((request) {
  if (...) {
    WebSocketTransformer.upgrade(request).then((websocket) {
      ...
    });
  } else {
    // Do normal HTTP request processing.
  }
});
Now if you search through PhantomJS's issue tracker you can find this issue:
11018 Update to final websocket standard
which basically says that the latest PhantomJS (1.9.7) uses an old WebSocket standard (I still haven't figured out which version sends out the Sec-WebSocket-Key1 information, but I assume it's not the RFC 6455 version).
So basically, my problem is that when I run the PhantomJS headless browser against my site, which uses Dart 1.3.3's WebSocket server implementation (basically some upgrade code like I pasted above), it says:
Headers from PhantomJS:
sec-websocket-key1: 327J w6iS/b!43 L2j5}2 2
connection: Upgrade
origin: http://mydomain.com
upgrade: WebSocket
sec-websocket-key2: 42 d 7 64 84622
host: mydomain.com
Dart:
WebSocketTransformer.isUpgradeRequest(request) = false
WebSocketException: Invalid WebSocket upgrade request
The upgrade of the request failed (I assume because of the mismatch of versions).
My question is: until PhantomJS gets updated to 2.0, is there a way I can fix my Dart back end so that it handles PhantomJS websockets as well?
According to the WebSocketTransformer docs, the upgrade function takes two arguments: a mandatory HttpRequest and an optional second argument:
static Future<WebSocket> upgrade(HttpRequest request, {Function protocolSelector(List<String> protocols)})
Could this maybe help me somehow?
The protocols won't help you. They allow the client and server to agree on a subprotocol that is used after the handshake for communication, but you can't modify the handshake and the exchanged header fields themselves.
What you could do is write your own complete WebSocket implementation (directly based on Dart HTTP and TCP) that matches the old implementation PhantomJS uses. But that won't work with newer clients. That way you might also be able to build an implementation that supports several versions (by checking the headers when you receive the handshake HTTP request and, depending on the handshake, forwarding to the appropriate implementation).
You would have to write at least your own WebSocketTransformer implementation. For this you could start by copying Dart's interface and implementation and modifying them wherever you need (check the licenses). If the actual WebSocket behavior after the handshake is compatible between the two specifications, you could reuse Dart's WebSocket class. If that is not the case (different framing, etc.), then you would also have to write your own WebSocket class.
Some pseudo code based on yours:
HttpServer server;
server.listen((request) {
  if (...) { // websocket condition
    if (request.headers.value("Sec-WebSocket-Key1") != null) {
      YourWebSocketTransformer.upgrade(request).then((websocket) {
        ... // websocket might need to be a different type than Dart's WebSocket
      });
    } else {
      WebSocketTransformer.upgrade(request).then((websocket) {
        ...
      });
    }
  } else {
    // Do normal HTTP request processing.
  }
});
I don't know your application, but it's probably not worth the effort. Bringing the old websocket implementation into Dart is probably the same effort as bringing the official implementation to PhantomJS. Therefore I think fixing PhantomJS should be preferred.
"No."
HttpRequest.headers is immutable, so you can't massage the request headers into a format that Dart is willing to accept. You can't do any Ruby-style monkey-patching, because Dart does not allow dynamic evaluation.
You can, should you choose a path of insanity, implement a compatible version of WebSockets by handling the raw HttpRequest yourself when you see a request coming in with the expected headers. I believe you can re-implement the WebSocket class if necessary. The source for the WebSocket is here.
Maybe it's possible to do that through inheritance. In Dart it's impossible to prevent a method from being overridden.
If you have the time and you really need this, you can re-implement some methods to patch the WebSocket for PhantomJS:
class MyWebSocket extends WebSocket {
  MyWebSocket(/* ... */) : super(/* ... */);

  methodYouNeedToOverride(/* ... */) {
    super.methodYouNeedToOverride(/* ... */);
    // Your patch
  }
}
This approach will allow you to access "protected" variables or methods, which may be useful for patching.
But be careful: WebSocket is just the visible part; all the implementation is in websocket_impl.dart.