RealCall.cancel looks like it 'could' close my connection - okhttp

For some reason, at 27 requests per second we sometimes start to see issues with okhttp, and we noticed the 5-requests-per-host limit. We are talking to an API that is sometimes flaky and times out.
I noticed that we are not cancelling requests on timeout, so they seem to still be in flight (i.e. I want to start using RealCall.cancel).
In looking into this, though, RealCall.cancel calls engine.cancel() if the engine is not null, which calls streamAllocation.cancel(), which runs the following code:
public void cancel() {
  HttpStream streamToCancel;
  RealConnection connectionToCancel;
  synchronized (connectionPool) {
    canceled = true;
    streamToCancel = stream;
    connectionToCancel = connection;
  }
  if (streamToCancel != null) {
    streamToCancel.cancel();
  } else if (connectionToCancel != null) {
    connectionToCancel.cancel();
  }
}
This looks extremely scary, as I just wanted to cancel the one request, not the entire connection. That is, I want to cancel just the HTTP/2 stream, maybe, but I definitely want the connection to stay alive (I think).
thanks,
Dean

If you cancel before there is a stream, such as during the TLS handshake, canceling will cancel the entire connection. Once you have a stream, canceling only cancels the stream.
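For what it's worth, here is a minimal sketch of cancelling an individual call on a deadline from the caller's side, using only the public Call API rather than touching RealCall internals. It assumes the OkHttp 3.x okhttp3 package (with 2.x the idea is the same under com.squareup.okhttp); the URL, client configuration and deadline value are placeholders.

import java.io.IOException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import okhttp3.Call;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

public class CancelOnDeadline {
    // Placeholder client and scheduler; reuse your application's instances.
    private static final OkHttpClient client = new OkHttpClient();
    private static final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    static Response getWithDeadline(String url, long deadlineMillis) throws IOException {
        Call call = client.newCall(new Request.Builder().url(url).build());
        // Cancel only this call if it is still running after the deadline.
        // Other calls sharing the client's connection pool are unaffected,
        // unless the cancel lands before a stream exists (e.g. during the
        // TLS handshake), in which case the whole connection is torn down.
        scheduler.schedule(call::cancel, deadlineMillis, TimeUnit.MILLISECONDS);
        return call.execute(); // throws IOException if the call was canceled
    }
}

Scheduling the cancel on a shared executor keeps this deadline independent of the connect/read timeouts configured on the client itself.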

Related

Connection timeout in jPOS client

I am using a jPOS client (in one of the classes of a Java Spring MVC program) to connect to an ISO 8583-based server. However, for some reason the server is not able to respond, so my program keeps waiting for the response and ends up hanging. What is the proper way to implement a connection timeout?
My client program looks like this:
public FieldsModal sendFundTransfer(FieldsModal field){
    try {
        JposLogger logger = new JposLogger(ISO_LOG_LOCATION);
        org.jpos.iso.ISOPackager customPackager = new GenericPackager(ISO_PACKAGER);
        ISOChannel channel = new PostChannel(ISO_SERVER_IP, Integer.parseInt(ISO_SERVER_PORT), customPackager); // live
        logger.jposlogconfig(channel);
        channel.connect();
        log4j.info("Connection established using PostChannel");
        ISOMsg m = new ISOMsg();
        m.set(0, field.getMti());
        //m.set(2, field.getField2());
        m.set(3, field.getField3());
        m.set(4, field.getField4());
        m.set(11, field.getField11());
        m.set(12, field.getField12());
        m.set(17, field.getField17());
        m.set(24, field.getField24());
        m.set(32, field.getField32());
        m.set(34, field.getField34());
        m.set(41, field.getField41());
        m.set(43, field.getField43());
        m.set(46, field.getField46());
        m.set(49, field.getField49());
        m.set(102, field.getField102());
        m.set(103, field.getField103());
        m.set(123, field.getField123());
        m.set(125, field.getField125());
        m.set(126, field.getField126());
        m.set(127, field.getField127());
        m.setPackager(customPackager);
        System.out.println(ISOUtil.hexdump(m.pack()));
        channel.send(m);
        log4j.info("Message has been send");
        ISOMsg r = channel.receive();
        r.setPackager(customPackager);
        System.out.println(ISOUtil.hexdump(r.pack()));
        channel.disconnect();
    } catch (Exception err) {
        System.out.println("sendFundTransfer : " + err);
    }
    return field;
}
Well, the real proper way would be to use Q2. Given you don't need a persistent connection, you could just set a timeout for the channel.
PostChannel channel = new PostChannel(ISO_SERVER_IP, Integer.parseInt(ISO_SERVER_PORT), customPackager);// live
channel.setTimeout(timeout); // timeout in milliseconds
This way the channel will auto-disconnect if nothing happens during the time specified by timeout, and your call to receive will throw an exception.
The alternative is using Q2 and a mux (see QMUX, for which you need to run Q2, or ISOMUX, which is kind of deprecated).
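As a rough illustration only, here is a minimal sketch of how the timeout fits into the question's flow, assuming BaseChannel.setTimeout as above; the host, port, packager file and 30-second value are placeholders to adapt to your setup.

import org.jpos.iso.ISOMsg;
import org.jpos.iso.channel.PostChannel;
import org.jpos.iso.packager.GenericPackager;

public class TimeoutExample {
    public static ISOMsg sendWithTimeout(ISOMsg request) throws Exception {
        GenericPackager packager = new GenericPackager("packager/iso87ascii.xml"); // placeholder packager config
        PostChannel channel = new PostChannel("iso.example.com", 9999, packager);  // placeholder host/port
        channel.setTimeout(30000); // socket read timeout in milliseconds
        channel.connect();
        try {
            channel.send(request);
            // Blocks for at most the timeout; throws an exception instead of hanging forever.
            return channel.receive();
        } finally {
            channel.disconnect();
        }
    }
}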

Blocking tcp packet receiving in Netty 4.x

How can I stop Netty from sending an ACK response to the client in Netty 4.x?
I'm trying to control the TCP packet receive speed in Netty in order to forward these packets to another server. Netty receives all client packets immediately, but it needs more time to send them out, so the client thinks everything is finished after sending to Netty.
So I want to know how to hold off receiving packets while Netty is still forwarding the previously received packets to the other server.
I'm not sure I really understand your question, so let me try to reformulate:
I suppose that your Netty server is acting as a proxy between clients and another server.
I suppose that what you want to do is to send the ack back to the client only once you have really sent the forwarded packet to the final server (not necessarily received by the final server, but at least sent by the Netty proxy).
If so, then you should use the future of the forwarded packet to respond back with the ack, such as (pseudo code):
channelOrCtxToFinalServer.writeAndFlush(packetToForward).addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) {
        // Perform Ack write back
        ctxOfClientChannel.writeAndFlush(AckPacket);
    }
});
where:
channelOrCtxToFinalServer is one of ChannelHandlerContext or Channel connected to your remote final server from your Netty proxy,
and ctxOfClientChannel is the current ChannelHandlerContext from your Netty handler that receives the packet from the client in the public void channelRead(ChannelHandlerContext ctxOfClientChannel, Object packetToForward) method.
EDIT:
For the big file transfer issue, you can have a look at the Proxy example here.
In particular, pay attention to the following. Using the same logic, receive the data from the client one chunk at a time:
yourServerBootstrap.childOption(ChannelOption.AUTO_READ, false);
// Allow to control one by one the speed of reception of client's packets
In your frontend handler:
public void channelRead(final ChannelHandlerContext ctx, Object msg) {
    if (outboundChannel.isActive()) {
        outboundChannel.writeAndFlush(msg).addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) {
                if (future.isSuccess()) {
                    // was able to flush out data, start to read the next chunk
                    ctx.channel().read();
                } else {
                    future.channel().close();
                }
            }
        });
    }
}
And finally, using the very same logic, add the final ack to your client (the ack depending of course on your protocol); see here and here:
/**
 * Closes the specified channel after all queued write requests are flushed.
 */
static void closeOnFlush(Channel ch) {
    if (ch.isActive()) {
        ch.writeAndFlush(AckPacket).addListener(ChannelFutureListener.CLOSE);
    }
}
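One piece that is easy to miss with AUTO_READ disabled: nothing will ever be read from the client unless something calls read() explicitly. Below is a sketch of the frontend handler's channelActive, loosely modeled on Netty's proxy example, where the first read is issued only once the outbound connection to the final server is up. BackendHandler, the host and the port are hypothetical placeholders, not part of the question's code.

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelOption;

public class FrontendHandler extends ChannelInboundHandlerAdapter {

    // Channel to the final server; used by channelRead above to forward packets.
    private volatile Channel outboundChannel;

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        final Channel inboundChannel = ctx.channel();
        Bootstrap b = new Bootstrap()
                .group(inboundChannel.eventLoop())
                .channel(inboundChannel.getClass())
                // BackendHandler is a hypothetical handler that writes the
                // final server's responses back to inboundChannel.
                .handler(new BackendHandler(inboundChannel))
                .option(ChannelOption.AUTO_READ, false);
        ChannelFuture f = b.connect("final-server.example.com", 8080); // placeholder host/port
        outboundChannel = f.channel();
        f.addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) {
                if (future.isSuccess()) {
                    // Outbound connection is up: ask for the first chunk from the client.
                    // Every later read() is triggered from channelRead's write listener.
                    inboundChannel.read();
                } else {
                    inboundChannel.close();
                }
            }
        });
    }
}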

Dropping a connection using Fiddler

I'm trying to simulate a connection drop using Fiddler to block a request completely. I used the AutoResponder with *drop, but my application completed and the request wasn't blocked, so I tried the following FiddlerScript, which gave the same behavior. Can anyone help?
static function OnBeforeRequest(oSession: Session)
{
    if (oSession.uriContains("/my uri/")) {
        oSession.oRequest.pipeClient.End();
        oSession.utilCreateResponseAndBypassServer();
        oSession.oResponse.headers.HTTPResponseCode = 0;
        oSession.oResponse.headers.HTTPResponseStatus = "0 Client Connection Dropped by script";
        oSession.state = SessionStates.Aborted;
        return;
    }
}
You haven't provided enough information; what specifically does "completed and didn't block the request" mean? Are you sure that your rule even matched?
For what it's worth, uriContains("/my uri/") will NEVER be true; URIs never contain unescaped spaces. It should be e.g. uriContains("/my%20uri/").

Async sends in .NET ActiveMQ

I'm looking to increase the performance of a high-throughput producer that I'm writing against ActiveMQ, and according to this useAsyncSend will:
Forces the use of Async Sends which adds a massive performance boost;
but means that the send() method will return immediately whether the
message has been sent or not which could lead to message loss.
However I can't see it making any difference to my simple test case.
Using this very basic application:
const string QueueName = "....";
const string Uri = "....";

static readonly Stopwatch TotalRuntime = new Stopwatch();

static void Main(string[] args)
{
    TotalRuntime.Start();
    SendMessage();
    Console.ReadLine();
}

static void SendMessage()
{
    var session = CreateSession();
    var destination = session.GetQueue(QueueName);
    var producer = session.CreateProducer(destination);
    Console.WriteLine("Ready to send 700 messages");
    Console.ReadLine();
    var body = new byte[600 * 1024];
    Parallel.For(0, 700, i => SendMessage(producer, i, body, session));
}

static void SendMessage(IMessageProducer producer, int i, byte[] body, ISession session)
{
    var message = session.CreateBytesMessage(body);
    var sw = new Stopwatch();
    sw.Start();
    producer.Send(message);
    sw.Stop();
    Console.WriteLine("Running for {0}ms: Sent message {1} blocked for {2}ms",
        TotalRuntime.ElapsedMilliseconds,
        i,
        sw.ElapsedMilliseconds);
}

static ISession CreateSession()
{
    var connectionFactory = new ConnectionFactory(Uri)
    {
        AsyncSend = true,
        CopyMessageOnSend = false
    };
    var connection = connectionFactory.CreateConnection();
    connection.Start();
    var session = connection.CreateSession(AcknowledgementMode.AutoAcknowledge);
    return session;
}
I get the following output:
Ready to send 700 messages
Running for 2430ms: Sent message 696 blocked for 12ms
Running for 4275ms: Sent message 348 blocked for 1858ms
Running for 5106ms: Sent message 609 blocked for 2689ms
Running for 5924ms: Sent message 1 blocked for 2535ms
Running for 6749ms: Sent message 88 blocked for 1860ms
Running for 7537ms: Sent message 610 blocked for 2429ms
Running for 8340ms: Sent message 175 blocked for 2451ms
Running for 9163ms: Sent message 89 blocked for 2413ms
.....
Which shows that each message takes about 800ms to send and the call to producer.Send() blocks for about two and a half seconds, even though the documentation says that the
"send() method will return immediately"
Also, these numbers are basically the same if I either change the Parallel.For to a normal for loop, or change AsyncSend = true to AlwaysSyncSend = true, so I don't believe the async switch is working at all...
Can anyone see what I'm missing here to make the send asynchronous?
After further testing:
According to the ANTS performance profiler, the vast majority of the runtime is being spent waiting for synchronization. It appears that the issue is that the various transport classes block internally through monitors. In particular, I seem to get hung up on the MutexTransport's OneWay method, which only allows one thread to access it at a time.
It looks as though the call to Send will block until the previous message has completed, which explains why my output shows that the first message blocked for 12ms while the next took 1858ms. I can have multiple transports by implementing a connection-per-message pattern, which improves matters and makes the message sends work in parallel, but it greatly increases the time to send a single message and uses up so many resources that it doesn't seem like the right solution.
I've retested all of this with 1.5.6 and haven't seen any difference.
As always, the best thing to do is update to the latest version (1.5.6 at the time of this writing). A send can block if the broker has producer flow control enabled and you've reached a queue size limit, although with async send this shouldn't happen unless you are sending with a producerWindowSize set. One good way to get help is to create a test case and submit it via a Jira issue to the NMS.ActiveMQ site so that we can look into it using your test code. There have been many fixes since 1.5.1, so I'd recommend giving the new version a try, as it could already be a non-issue.

XHR Bandwidth reduction

So we're using XHR to validate that pages exist and have content, but since we make a lot of requests we wanted to trim down some of the bandwidth used.
We thought about using a HEAD request to check for a non-200 status, then realized that's still two requests if the page exists, and came up with this sample code:
Ajax.prototype.get = function (location, callback)
{
    var Request = new XMLHttpRequest();
    Request.open("GET", location, true);
    Request.onreadystatechange = function ()
    {
        if (Request.readyState === Request.HEADERS_RECEIVED)
        {
            if (Request.status != 200)
            {
                // Ignore the data to save bandwidth
                callback(Request);
                Request.abort();
            }
            else
            {
                // Override the callback here to assure a single callback fire
                Request.onreadystatechange = function ()
                {
                    if (Request.readyState === Request.DONE)
                    {
                        callback(Request);
                    }
                };
            }
        }
    };
    Request.send(null);
};
What I would like to know is: does this actually work, or does the response body always come back to the client anyway?
Thanks
I won't give a definitive answer, but I have some thoughts that are too long for a comment.
Theoretically, aborting the request should cause the underlying connection to be closed. Assuming TCP-based communication, that means sending a FIN to the server, which should then stop sending data and ACK the FIN. But this is HTTP, and there might be other magic going on (like connection pipelining, etc.)...
Anyway, when you close the connection early, the client will still receive all the data that was sent during the round-trip delay, since the server will keep sending at least until it gets the stop signal. If you have a moderate delay and a high-bandwidth connection, this could be a lot of data and will, depending on the total amount, most likely be a good portion of the complete response.
Note that, while your code will not receive any of this data, it will still be transferred to the client's network device and passed at least part of the way up the network stack. So, while this data never reaches your application level, the bandwidth is consumed anyway.
My (educated) guess is that it will not save as much as you would like (under "normal" conditions). I would suggest you do a real-world test and see if it is worth the effort.
