HTTP/2 ERR_CONNECTION_CLOSED (Too much overhead) - spring-boot

We are developing a project using Angular on the front end and Spring on the back end. Nothing new. But we have set up the backend to use HTTP/2, and from time to time we run into weird problems.
Today I started playing with "Network Log Export" from Chrome and I found this interesting piece of information in the HTTP2_SESSION line of the log.
t=43659 [st=41415] HTTP2_SESSION_RECV_GOAWAY
--> active_streams = 4
--> debug_data = "Connection [263], Too much overhead so the connection will be closed"
--> error_code = "11 (ENHANCE_YOUR_CALM)"
--> last_accepted_stream_id = 77
--> unclaimed_streams = 0
t=43659 [st=41415] HTTP2_SESSION_CLOSE
--> description = "Connection closed"
--> net_error = -100 (ERR_CONNECTION_CLOSED)
t=43661 [st=41417] HTTP2_SESSION_POOL_REMOVE_SESSION
t=43661 [st=41417] -HTTP2_SESSION
It looks like the root cause of the ERR_CONNECTION_CLOSED is that the server decides there is too much overhead from the same client and closes the connection.
The question is: can we tune the server to accept overhead up to a certain limit? How? I believe this is something we should be able to tune in Spring, Tomcat, or somewhere in that stack.
Cheers
Ignacio

The overhead protection was put in place in response to a collection of CVEs reported against HTTP/2 in the middle of 2019. While Tomcat wasn't directly affected (the malicious input didn't trigger excessive load), we did take steps to block input that matched the malicious profile.
From your GitHub comment, it appears you see issues with POSTs. That strongly suggests that the client is sending the POST data in multiple small packets rather than a smaller number of larger packets. Some clients (e.g. Chrome) are known to do this occasionally due to the way they buffer data.
A number of the HTTP/2 DoS attacks could be summarized as sending more overhead than data. While Tomcat wasn't directly affected, we took the decision to monitor for clients operating in this way and drop connections if any were found on the grounds that the client was likely to be malicious.
Generally, data packets reduce the overhead count, non-data packets increase the overhead count and (potentially) malicious packets increase the overhead count significantly. The idea is that an established, generally well-behaved, connection should be able to survive the occasional 'suspect' packet but any more than that will quickly trigger the connection to be closed.
In terms of small POST packets, the key configuration settings are:
overheadCountFactor
overheadDataThreshold
The overhead count starts at -10. For every DATA frame received it is reduced by 1. For every SETTINGS, PRIORITY and PING frame it is increased by overheadCountFactor. If the overhead count goes above 0, the connection is closed.
In addition, if the average size of a received non-final DATA frame and the previously received DATA frame (on that same stream) is less than overheadDataThreshold then the overhead count is increased by overheadDataThreshold/(average size of current and previous DATA frames). In this way, the smaller the DATA frame, the greater the increase in the overhead. A small number of small non-final DATA frames should be enough to trigger connection closure.
The averaging is there so buffering such as exhibited by Chrome does not trigger the overhead protection.
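To make the arithmetic concrete, here is a toy sketch of the accounting described above. It is purely illustrative and not Tomcat's actual code; the frame sizes and the overheadCountFactor value are made up for the example, while -10 and 1024 are the defaults mentioned in this answer.

// Toy model of the overhead accounting described above (illustration only).
public class OverheadCountSketch {
    public static void main(String[] args) {
        int overheadCountFactor = 1;      // assumed value, for illustration only
        int overheadDataThreshold = 1024; // default mentioned above
        int overheadCount = -10;          // starting value

        // A SETTINGS, PRIORITY or PING frame adds overheadCountFactor.
        overheadCount += overheadCountFactor;

        // A full-sized DATA frame reduces the count by 1.
        overheadCount -= 1;

        // A small non-final DATA frame: average its size with the previous DATA
        // frame on the same stream and, if below the threshold, add a penalty.
        int previousDataFrameSize = 300;
        int currentDataFrameSize = 100;
        int average = (previousDataFrameSize + currentDataFrameSize) / 2; // 200
        if (average < overheadDataThreshold) {
            overheadCount += overheadDataThreshold / average; // +5 here
        }

        // A few more small frames like this and the count crosses 0.
        System.out.println("overheadCount = " + overheadCount
                + (overheadCount > 0 ? " -> connection would be closed" : ""));
    }
}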
To diagnose this problem you need to look at the logs to see what size non-final DATA frames are being sent by the client. I suspect that will show a series of non-final DATA frames with size less than 1024 (the default for overheadDataThreshold).
To fix the issue my recommendation is to look at the client first. Why is it sending small non-final DATA frames and what can be done to stop it?
If you need an immediate mitigation then you can reduce overheadDataThreshold. The information you get on DATA frame sizes sent by the client should guide you as to what to set this to. It needs to be smaller than the DATA frames being sent by the client. In extremis you can set overheadDataThreshold to zero to disable the protection.
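If you do decide to tune it from the Spring Boot side, a minimal sketch is below. It assumes Spring Boot 2.x with embedded Tomcat 9.0.24 or later, where org.apache.coyote.http2.Http2Protocol exposes setOverheadDataThreshold and setOverheadCountFactor; check the setter names against the Tomcat version you actually ship. In a standalone Tomcat the same values are normally set as overheadDataThreshold / overheadCountFactor attributes on the <UpgradeProtocol> element in server.xml.

import org.apache.catalina.connector.Connector;
import org.apache.coyote.UpgradeProtocol;
import org.apache.coyote.http2.Http2Protocol;
import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class Http2OverheadConfig {

    @Bean
    public WebServerFactoryCustomizer<TomcatServletWebServerFactory> http2OverheadCustomizer() {
        return factory -> factory.addConnectorCustomizers((Connector connector) -> {
            for (UpgradeProtocol upgrade : connector.findUpgradeProtocols()) {
                if (upgrade instanceof Http2Protocol) {
                    Http2Protocol http2 = (Http2Protocol) upgrade;
                    // Tolerate smaller non-final DATA frames before they count as
                    // overhead; 0 disables this particular check entirely.
                    http2.setOverheadDataThreshold(256);
                    // Optionally also relax how much SETTINGS/PRIORITY/PING frames
                    // count towards the overhead total.
                    // http2.setOverheadCountFactor(1);
                }
            }
        });
    }
}

The 256 above is only an example; as noted, derive the value from the DATA frame sizes you observe in the logs, and use 0 only if you accept disabling that protection.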

Related

org.apache.flink.runtime.io.network.netty.exception.RemoteTransportException: Lost connection to task manager

I'm working on a project that uses Flink (version 1.4.2) for bulk data ingestion into my graph database (JanusGraph). Data ingestion has two phases: one is vertex data ingestion and the other is edge data ingestion into the graph database. Vertex data ingestion runs without any issue, but during edge ingestion I'm getting an error saying Lost connection to task manager taskmanagerName. The detailed error traceback from flink-taskmanager-b6f46f6c8-fgtlw is attached below:
2019-08-01 18:13:26,025 ERROR org.apache.flink.runtime.operators.BatchTask
- Error in task code: CHAIN Join(Remap EDGES id: TO) -> Map (Key Extractor) -> Combine (Deduplicate edges including bi-directional edges) (62/80)
org.apache.flink.runtime.io.network.netty.exception.RemoteTransportException: Lost connection to task manager 'flink-taskmanager-b6f46f6c8-gcxnm/10.xx.xx.xx:6121'.
This indicates that the remote task manager was lost.
at org.apache.flink.runtime.io.network.netty.PartitionRequestClientHandler.exceptionCaught(PartitionRequestClientHandler.java:146)
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:275)
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:253)
at org.apache.flink.shaded.netty4.io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131)
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:275)
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:253)
at org.apache.flink.shaded.netty4.io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131)
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:275)
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:253)
at org.apache.flink.shaded.netty4.io.netty.channel.ChannelHandlerAdapter.exceptionCaught(ChannelHandlerAdapter.java:79)
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:275)
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:253)
at org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.fireExceptionCaught(DefaultChannelPipeline.java:835)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.handleReadException(AbstractNioByteChannel.java:87)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:162)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at org.apache.flink.shaded.netty4.io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:311)
at org.apache.flink.shaded.netty4.io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881)
at org.apache.flink.shaded.netty4.io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:241)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119)
... 6 more
For ease of understanding, let's say:
flink-taskmanager-b6f46f6c8-gcxnm as TM1 and
flink-taskmanager-b6f46f6c8-fgtlw as TM2
On debugging I was able to find that TM1 requested a ResultPartition (RP) from TM2 and TM2 started to send the ResultPartition to TM1. But on checking the logs from TM1 we found that it waited a long time to get the RP from TM2, and after some time it started to deregister the accepted task. We believe that deregistering the task after the Netty remote transport exception caused TM2 to report the "Lost connection to task manager" error for this specific job.
Both task managers are running in separate EC2 instances (m4.2xlarge). I have verified CPU and memory utilization on both instances and all metrics were within limits.
Can you please tell me why the task manager is behaving this way and how to fix the issue?
Thanks in advance
Flushing Buffers to Netty
In the picture above, the credit-based flow control mechanics actually sit inside the “Netty Server” (and “Netty Client”) components and the buffer the RecordWriter is writing to is always added to the result subpartition in an empty state and then gradually filled with (serialised) records. But when does Netty actually get the buffer? Obviously, it cannot take bytes whenever they become available since that would not only add substantial costs due to cross-thread communication and synchronisation, but also make the whole buffering obsolete.
In Flink, there are three situations that make a buffer available for consumption by the Netty server:
a buffer becomes full when writing a record to it, or
the buffer timeout hits, or
a special event such as a checkpoint barrier is sent.
Flush after Buffer Full
The RecordWriter works with a local serialisation buffer for the current record and will gradually write these bytes to one or more network buffers sitting at the appropriate result subpartition queue. Although a RecordWriter can work on multiple subpartitions, each subpartition has only one RecordWriter writing data to it. The Netty server, on the other hand, is reading from multiple result subpartitions and multiplexing the appropriate ones into a single channel as described above. This is a classical producer-consumer pattern with the network buffers in the middle and as shown by the next picture. After (1) serialising and (2) writing data to the buffer, the RecordWriter updates the buffer’s writer index accordingly. Once the buffer is completely filled, the record writer will (3) acquire a new buffer from its local buffer pool for any remaining bytes of the current record - or for the next one - and add the new one to the subpartition queue. This will (4) notify the Netty server of data being available if it is not aware yet [4]. Whenever Netty has capacity to handle this notification, it will (5) take the buffer and send it along the appropriate TCP channel.
Image 1
[4] We can assume it already got the notification if there are more finished buffers in the queue.
Flush after Buffer Timeout
In order to support low-latency use cases, we cannot rely only on buffers being full in order to send data downstream. There may be cases where a certain communication channel does not have too many records flowing through it, and relying on full buffers would unnecessarily increase the latency of the few records you actually have. Therefore, a periodic process will flush whatever data is available down the stack: the output flusher. The periodic interval can be configured via StreamExecutionEnvironment#setBufferTimeout and acts as an upper bound on the latency (for low-throughput channels). The following picture shows how it interacts with the other components: the RecordWriter serialises and writes into network buffers as before but concurrently, the output flusher may (3,4) notify the Netty server of data being available if Netty is not already aware (similar to the “buffer full” scenario above). When Netty handles this notification (5) it will consume the available data from the buffer and update the buffer’s reader index. The buffer stays in the queue - any further operation on this buffer from the Netty server side will continue reading from the reader index next time.
Image 2
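For reference, the flush interval mentioned above is configured per job. A minimal sketch (the 5 ms value is just an example):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class BufferTimeoutSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Flush partially filled network buffers at least every 5 ms,
        // trading some throughput for lower latency.
        env.setBufferTimeout(5);

        // 0 flushes after every record (lowest latency, most overhead);
        // -1 flushes only when a buffer is full (highest throughput).
        // env.setBufferTimeout(0);
        // env.setBufferTimeout(-1);

        // ... build and execute the job as usual ...
    }
}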
Reference:
Below link may help you out.
flink-network-stack-details
Can you check the GC logs of TM1 and TM2 to see if there were full GCs, which may cause a heartbeat timeout?

WinHttpWriteData completion

I'm using WinHTTP to transfer large files to a PHP-based web server and I want to display the progress and an estimated speed. After reading the docs I have decided to use chunked transfer encoding. The files get transferred correctly but there is an issue with estimating the time that I cannot solve.
I'm using a loop to send chunks with WinHttpWriteData (header + trailer + footer) and I compute the time difference between start and finish with GetTickCount. I have a fixed bandwidth of 4 Mbit/s configured on my router in order to test the correctness of my estimation.
The typical time difference for chunks of 256 KB is between 450-550 ms, which is correct. The problem is that once in a while (every few seconds or tens of seconds) WinHttpWriteData returns really, really fast, like 4-10 ms, which is obviously not possible. The next measured difference is then much higher than the average 500 ms.
Why does WinHttpWriteData confirm, either synchronously or asynchronously, that it has written the data to the destination when, in reality, the data is still being transferred? Any ideas?
Oversimplified, my code looks like:
while (dataLeft)
{
    t1 = GetTickCount();

    // One chunk: header, payload, footer (chunked transfer encoding).
    WinHttpWriteData(hRequest, chunkHdr, chunkHdrLen, NULL);
    waitWriteConfirm();
    WinHttpWriteData(hRequest, actualData, actualDataLen, NULL);
    waitWriteConfirm();
    WinHttpWriteData(hRequest, chunkFtr, chunkFtrLen, NULL);
    waitWriteConfirm();

    t2 = GetTickCount();
    tdif = t2 - t1;   // elapsed time for this chunk
}
This is simply the nature of how sockets work in general.
Whether you call a lower level function like send() or a higher level function like WinHttpWriteData(), the functions return success/failure based on whether they are able to pass data to the underlying socket kernel or not. The kernel queues up data for eventual transmission in the background. The kernel does not report back when the data is actually transmitted, or whether the receiver acks the data. The kernel happily accepts new data as long as there is room in the queue, even if it will take a while to actually transmit. Otherwise, it will block the sender until room becomes available in the queue.
If you need to monitor actual transmission speed, you have to monitor the low level network activity directly, such as with a packet sniffer or driver hook. Otherwise, you can only monitor how fast you are able to pass data to the kernel (which is usually good enough for most purposes).
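As a language-neutral illustration of that point (sketched in Java rather than WinHTTP, with a hypothetical helper name), timing a single socket write only measures how long the kernel took to accept the bytes into its send buffer, not how long they took to reach the peer:

import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;

public final class SendTiming {
    // Times a single write on an already-connected socket. The call returns as soon
    // as the kernel has queued the bytes in its send buffer, which can be long before
    // the peer has received them -- or it blocks if the send buffer is currently full.
    public static long timeWriteMillis(Socket socket, byte[] chunk) throws IOException {
        OutputStream out = socket.getOutputStream();
        long start = System.nanoTime();
        out.write(chunk);
        out.flush();
        return (System.nanoTime() - start) / 1_000_000;
    }
}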

What does multiplexing mean in HTTP/2

Could someone please explain multiplexing in relation to HTTP/2 and how it works?
Put simply, multiplexing allows your browser to fire off multiple requests at once on the same connection and receive the responses back in any order.
And now for the much more complicated answer...
When you load a web page, it downloads the HTML page, it sees it needs some CSS, some JavaScript, a load of images... etc.
Under HTTP/1.1 you can only download one of those at a time on your HTTP/1.1 connection. So your browser downloads the HTML, then it asks for the CSS file. When that's returned it asks for the JavaScript file. When that's returned it asks for the first image file... etc. HTTP/1.1 is basically synchronous - once you send a request you're stuck until you get a response. This means most of the time the browser is not doing very much, as it has fired off a request, is waiting for a response, then fires off another request, then is waiting for a response... etc. Of course complex sites with lots of JavaScript do require the browser to do lots of processing, but that depends on the JavaScript being downloaded so, at least at the beginning, the delays inherent to HTTP/1.1 do cause problems. Typically the server isn't doing very much either (at least per request - of course they add up for busy sites), because it should respond almost instantly for static resources (like CSS, JavaScript, images, fonts... etc.) and hopefully not too much longer even for dynamic requests (that require a database call or the like).
So one of the main issues on the web today is the network latency in sending the requests between browser and server. It may only be tens or perhaps hundreds of milliseconds, which might not seem much, but they add up and are often the slowest part of web browsing - especially as websites get more complex and require extra resources (as they are getting) and Internet access is increasingly via mobile (with higher latency than broadband).
As an example let's say there are 10 resources that your web page needs to load after the HTML is loaded itself (which is a very small site by today's standards as 100+ resources is common, but we'll keep it simple and go with this example). And let's say each request takes 100ms to travel across the Internet to the web server and back and the processing time at either end is negligible (let's say 0 for this example for simplicity's sake). As you have to send each resource and wait for a response one at a time, this will take 10 * 100ms = 1,000ms or 1 second to download the whole site.
To get around this, browsers usually open multiple connections to the web server (typically 6). This means a browser can fire off multiple requests at the same time, which is much better, but at the cost of the complexity of having to set up and manage multiple connections (which impacts both browser and server). Let's continue the previous example and also say there are 4 connections and, for simplicity, let's say all requests are equal. In this case you can split the requests across all four connections, so two will have 3 resources to get, and two will have 2 resources to get, totalling the ten resources (3 + 3 + 2 + 2 = 10). In that case the worst case is 3 round trips or 300ms = 0.3 seconds - a good improvement, but this simple example does not include the cost of setting up those multiple connections, nor the resource implications of managing them (which I've not gone into here as this answer is long enough already but setting up separate TCP connections does take time and other resources - to do the TCP connection, HTTPS handshake and then get up to full speed due to TCP slow start).
HTTP/2 allows you to send off multiple requests on the same connection - so you don't need to open multiple connections as per above. So your browser can say "Gimme this CSS file. Gimme that JavaScript file. Gimme image1.jpg. Gimme image2.jpg... Etc." to fully utilise the one single connection. This has the obvious performance benefit of not delaying sending of those requests while waiting for a free connection. All these requests make their way through the Internet to the server in (almost) parallel. The server responds to each one, and then they start to make their way back. In fact it's even more powerful than that as the web server can respond to them in any order it feels like and send back files in different order, or even break each file requested into pieces and intermingle the files together. This has the secondary benefit of one heavy request not blocking all the other subsequent requests (known as the head of line blocking issue). The web browser then is tasked with putting all the pieces back together. In the best case (assuming no bandwidth limits - see below), if all 10 requests are fired off pretty much at once in parallel, and are answered by the server immediately, this means you basically have one round trip or 100ms or 0.1 seconds, to download all 10 resources. And this has none of the downsides that multiple connections had for HTTP/1.1! This is also much more scalable as resources on each website grow (currently browsers open up to 6 parallel connections under HTTP/1.1 but should that grow as sites get more complex?).
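To see what this looks like from client code, here is a minimal sketch using Java 11's java.net.http.HttpClient; the URLs are placeholders. Against an HTTP/2-capable server, all of these requests can be multiplexed over a single connection and the responses may complete in any order:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class Http2MultiplexDemo {
    public static void main(String[] args) {
        // One client; with HTTP/2 the requests below can share a single connection
        // instead of queuing behind each other.
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)
                .build();

        List<URI> resources = List.of(
                URI.create("https://example.com/styles.css"),
                URI.create("https://example.com/app.js"),
                URI.create("https://example.com/image1.jpg"));

        List<CompletableFuture<HttpResponse<String>>> inFlight = resources.stream()
                .map(uri -> client.sendAsync(
                        HttpRequest.newBuilder(uri).build(),
                        HttpResponse.BodyHandlers.ofString()))
                .collect(Collectors.toList());

        // Responses may complete in any order; here we simply wait for all of them.
        CompletableFuture.allOf(inFlight.toArray(new CompletableFuture[0])).join();
        inFlight.forEach(f -> System.out.println(
                f.join().uri() + " -> " + f.join().statusCode()));
    }
}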
This diagram shows the differences, and there is an animated version too.
Note: HTTP/1.1 does have the concept of pipelining which also allows multiple requests to be sent off at once. However, they still had to be returned in the order they were requested, in their entirety, so it is nowhere near as good as HTTP/2, even if conceptually it's similar. Not to mention the fact that this is so poorly supported by both browsers and servers that it is rarely used.
One thing highlighted in the comments below is how bandwidth impacts us here. Of course your Internet connection is limited by how much you can download and HTTP/2 does not address that. So if those 10 resources discussed in the above examples are all massive print-quality images, then they will still be slow to download. However, for most web browsing, bandwidth is less of a problem than latency. So if those ten resources are small items (particularly text resources like CSS and JavaScript which can be gzipped to be tiny), as is very common on websites, then bandwidth is not really an issue - it's the sheer volume of resources that is often the problem and HTTP/2 looks to address that. This is also why concatenation is used in HTTP/1.1 as another workaround, so for example all CSS is often joined together into one file: the amount of CSS downloaded is the same but by doing it as one resource there are huge performance benefits (though less so with HTTP/2 and in fact some say concatenation should be an anti-pattern under HTTP/2 - though there are arguments against doing away with it completely too).
To put it as a real world example: assume you have to order 10 items from a shop for home delivery:
HTTP/1.1 with one connection means you have to order them one at a time and you cannot order the next item until the last arrives. You can understand it would take weeks to get through everything.
HTTP/1.1 with multiple connections means you can have a (limited) number of independent orders on the go at the same time.
HTTP/1.1 with pipelining means you can ask for all 10 items one after the other without waiting, but then they all arrive in the specific order you asked for them. And if one item is out of stock then you have to wait for that before you get the items you ordered after that - even if those later items are actually in stock! This is a bit better but is still subject to delays, and let's say most shops don't support this way of ordering anyway.
HTTP/2 means you can order your items in any particular order - without any delays (similar to above). The shop will dispatch them as they are ready, so they may arrive in a different order than you asked for them, and they may even split items so some parts of that order arrive first (so better than above). Ultimately this should mean you 1) get everything quicker overall and 2) can start working on each item as it arrives ("oh that's not as nice as I thought it would be, so I might want to order something else as well or instead").
Of course you're still limited by the size of your postman's van (the bandwidth) so they might have to leave some packages back at the sorting office until the next day if they are full up for that day, but that's rarely a problem compared to the delay in actually sending the order across and back. Most of web browsing involves sending small letters back and forth, rather than bulky packages.
Since @Juanma Menendez's answer is correct while his diagram is confusing, I decided to improve upon it, clarifying the difference between multiplexing and pipelining, notions that are often conflated.
Pipelining (HTTP/1.1)
Multiple requests are sent over the same HTTP connection. Responses are received in the same order. If the first response takes a lot of time, other responses have to wait in line. Similar to CPU pipelining, where an instruction is fetched while another one is being decoded. Multiple instructions are in flight at the same time, but their order is preserved.
Multiplexing (HTTP/2)
Multiple requests are sent over the same HTTP connection. Responses are received in an arbitrary order. No need to wait for a slow response that's blocking others. Similar to out-of-order instruction execution in modern CPUs.
Hopefully the improved image clarifies the difference:
Request multiplexing
HTTP/2 can send multiple requests for data in parallel over a single TCP connection. This is the most advanced feature of the HTTP/2 protocol because it allows you to download web files asynchronously from one server. Most modern browsers limit the number of TCP connections they will open to a single server. This reduces the additional round-trip time (RTT), making your website load faster without any optimization, and makes domain sharding unnecessary.
Multiplexing in HTTP/2 is the type of relationship between the browser and the server in which a single connection is used to deliver multiple requests and responses in parallel, creating many individual frames in the process.
Multiplexing breaks away from the strict request-response semantics and enables one-to-many or many-to-many relationships.
Simple answer (source):
Multiplexing means your browser can send multiple requests and receive multiple responses "bundled" into a single TCP connection. So the workload associated with DNS lookups and handshakes is saved for files coming from the same server.
Complex/detailed answer:
Look at the answer provided by @BazzaDP above.

boost::asio sending data faster than receiving over TCP. Or how to disable buffering

I have created a client/server program; the client starts an instance of a Writer class and the server starts an instance of a Reader class. The Writer then writes DATA_SIZE bytes of data asynchronously to the Reader every USLEEP milliseconds. Every successive async_write request by the Writer is issued only once the "on write" handler from the previous request has been called.
The problem is, if the Writer (client) is writing more data into the socket than the Reader (server) is capable of receiving, this seems to be the behaviour:
The Writer will start writing into (I think) the system buffer, and even though the data has not yet been received by the Reader, it will keep calling the "on write" handler without an error.
When the buffer is full, boost::asio won't fire the "on write" handler anymore until the buffer gets smaller.
In the meantime, the Reader is still receiving small chunks of data.
The fact that the Reader keeps receiving bytes after I close the Writer program seems to prove this theory correct.
What I need to achieve is to prevent this buffering, because the data needs to be "real time" (as much as possible).
I'm guessing I need to use some combination of the socket options that asio offers, like no_delay or send_buffer_size, but I'm just guessing here, as I haven't had success experimenting with these.
I think the first solution one might think of is to use UDP instead of TCP. I will in fact need to switch to UDP for other reasons in the near future, but I would first like to find out how to do it with TCP, just for the sake of having it straight in my head in case I have a similar problem some other day.
NOTE1: Before I started experimenting with asynchronous operations in the asio library, I had implemented this same scenario using threads, locks and asio::sockets and did not experience such buffering at that time. I had to switch to the asynchronous API because asio does not seem to allow timed interruptions of synchronous calls.
NOTE2: Here is a working example that demonstrates the problem: http://pastie.org/3122025
EDIT: I've done one more test: in my NOTE1 I mentioned that when I was using asio::iosockets I did not experience this buffering. So I wanted to be sure and created this test: http://pastie.org/3125452 It turns out that the buffering is there even with asio::iosockets, so there must have been something else that caused it to go smoothly, possibly a lower FPS.
TCP/IP is definitely geared towards maximizing throughput, as the intention of most network applications is to transfer data between hosts. In such scenarios it is expected that a transfer of N bytes will take T seconds, and clearly it doesn't matter if the receiver is a little slow to process the data. In fact, as you noticed, the TCP/IP protocol implements a sliding window which allows the sender to buffer some data so that it is always ready to be sent, but leaves the ultimate throttling control up to the receiver. The receiver can go full speed, pace itself or even pause transmission.
If you don't need throughput and instead want to guarantee that the data your sender is transmitting is as close to real time as possible, then what you need is to make sure the sender doesn't write the next packet until it receives an acknowledgement from the receiver that it has processed the previous data packet. So instead of blindly sending packet after packet until you are blocked, define a message structure for control messages to be sent from the receiver back to the sender.
Obviously with this approach, your trade-off is that each sent packet is closer to the sender's real time, but you are limiting how much data you can transfer while slightly increasing the total bandwidth used by your protocol (i.e. the additional control messages). Also keep in mind that "close to real time" is relative, because you will still face delays in the network as well as in the receiver's ability to process data. So you might also take a look at the design constraints of your specific application to determine how "close" you really need to be.
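As a language-neutral sketch of that acknowledgement scheme (written in Java here rather than Boost.Asio, with a made-up length-prefixed framing; the same structure maps onto an async_write/async_read pair), the sender only moves on once the receiver confirms it has processed the previous chunk:

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.Socket;

public class AckedSender {
    // Sends each chunk and blocks until the receiver explicitly acknowledges it,
    // so the sender can never run ahead of what the receiver has processed.
    public static void sendChunks(Socket socket, byte[][] chunks) throws Exception {
        DataOutputStream out = new DataOutputStream(socket.getOutputStream());
        DataInputStream in = new DataInputStream(socket.getInputStream());
        for (byte[] chunk : chunks) {
            out.writeInt(chunk.length);   // simple length-prefixed framing
            out.write(chunk);
            out.flush();
            byte ack = in.readByte();     // receiver replies only after processing the chunk
            if (ack != 1) {
                throw new IllegalStateException("unexpected ack: " + ack);
            }
        }
    }
}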
If you need to be very close, but at the same time you don't care if packets are lost because old packet data is superseded by new data, then UDP/IP might be a better alternative. However, a) if you have reliable delivery requirements, you might end up reinventing a portion of TCP/IP's wheel, b) keep in mind that certain networks (corporate firewalls) tend to block UDP/IP while allowing TCP/IP traffic, and c) even UDP/IP won't be exactly real time.

What is a good pattern for using AsyncSockets in .NET 3.5 when initiating several client connections

I'm re-building an IM gateway and hope to take advantage of the new performance features in AsyncSockets for .NET 3.5.
My existing implementation simply creates packets and forwards IM requests from users to the various IM networks as required, handling request/response streams for each connected user's session (socket).
I presently have to cope with IAsyncResult and, as you know, it's not very pretty or scalable.
My confusion is this basically:
1) In using the new Begin/End and SocketAsyncEventArgs in 3.5, do we still need to create one SocketAsyncEventArgs per socket?
2) Do we gain anything by pre-initializing, say, 20,000 client connections, since we know the expected max_connections per server is 20,000?
3) Do we still need to use a LOH (large object heap) allocated byte[] to handle receive data, as shown in the SocketServers example on MSDN? We are not building a server per se, but are still handling a lot of independent receives for each connected socket.
4) Maybe there is a better pattern altogether for what I'm trying to achieve?
Thanks in advance.
Charles.
1) IAsyncResult/Begin/End is a completely different system from the "xAsync" methods that use SocketAsyncEventArgs. You're better off using SocketAsyncEventArgs and dropping Begin/End entirely.
2) Not really. Initialize a smaller number (50? 100?) and use an intermediate class (i.e. a "resource pool") to manage them. As more requests come in, grow the pool by another 50 or 100, for example. The tough part is efficiently "scaling down" the number of pooled items as resource requirements drop. A large number of sockets/buffers/etc. will consume a large amount of memory, so it's better to only allocate them in batches as the server requires it (see the sketch after this list).
3) You don't need to use it, but it's still a good idea. The buffer will still be "pinned" during each call.
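For point 2, here is a language-neutral sketch of the grow-in-batches pool idea (plain Java generics standing in for a pool of SocketAsyncEventArgs instances and their buffers); it only shows the shape of the pattern, not production-ready pooling:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

public class BatchPool<T> {
    private final Deque<T> idle = new ArrayDeque<>();
    private final Supplier<T> factory;
    private final int batchSize;

    public BatchPool(Supplier<T> factory, int batchSize) {
        this.factory = factory;
        this.batchSize = batchSize;
        grow(); // pre-allocate the first batch (e.g. 50-100 items)
    }

    public synchronized T acquire() {
        if (idle.isEmpty()) {
            grow(); // allocate another batch only when demand requires it
        }
        return idle.pop();
    }

    public synchronized void release(T item) {
        idle.push(item); // return the item for reuse instead of allocating a new one
    }

    private void grow() {
        for (int i = 0; i < batchSize; i++) {
            idle.push(factory.get());
        }
    }
}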
