Packets of length 1400 not being received from a specific server - amazon-ec2

This is regarding Amazon hosting (EC2). Sometimes the server doesn't respond with packets greater than a specific length (about 1400 bytes). So any operation works if it involves downloading/reading packets within that length, while responses larger than that are never received by the client. Every other website works.
For example, if you make a WinSCP connection, authentication etc. will complete, but it won't be able to execute commands as simple as "ls" or listing the directory contents because of the packet length issue.
This only happens for a few clients, intermittently, and starts working again when the network is restarted. The behavior is consistent across all protocols.

Related

How to solve the problem of a high-data-rate flow hitting switches that do not have a flow rule for that flow, in OpenFlow/Mininet?

I am using iperf traffic generation and a hard timeout as an extension to the simple_switch_13.py code in Mininet with the Ryu SDN controller. I am using a linear topology with 8 switches, and I set the hard timeout to 5 seconds.
I am working with only one flow. I started the iperf traffic between two hosts (let's say h1 to h7; the terms used are the same as in the Mininet linear topology) for 10 seconds. When the flow starts, ARP packets are generated in the network. Then an ARP reply from h7 is sent to h1, which creates seven packet-in messages (from s7, s6, ..., s1); the respective flow rules are installed in the switches and the reply finally reaches h1. Then h1 sends the TCP flow to h7, which also creates seven packet-in messages (from s1, s2, ..., s7); the respective flow rules are installed in the switches and the flow reaches h7. So far everything works fine.
But once the timeout (5 seconds) expires, the flow rules in the switches are deleted. Because the flow is still in the network, what should happen is this: the controller receives one packet-in message and the switch buffers the rest of the packets, so that when the respective flow rule is installed in the switch's flow table, the buffered packets use the installed rule to pass through the switch. But that is not happening. The controller is getting a lot of packet-in messages before the flow rule gets installed into the switch (every packet that arrives at the switch is sent to the controller). What might be the reason for so many packet-in messages? Are the switch buffers not working correctly (even though I am getting packet-in messages with a buffer_id)? How do I solve this issue?
This also happens with an idle timeout, and with a UDP flow at the start (i.e. when h1 starts communicating with h7): the switches along the path generate a lot of packet-in messages.
After doing a lot of research I understand that it is not a problem with the hard timeout or idle timeout. It happens when a flow with a high data rate hits a switch that doesn't have a flow rule for it: the switch then sends a lot of packet-in messages for the same flow. It is not queuing (or perhaps not storing) the rest of the flow's packets after sending one packet-in for that flow. How do I solve this issue in Mininet?

HTTP/2 ERR_CONNECTION_CLOSED (Too much overhead)

We are developing a project using Angular on the front end and Spring on the backend. Nothing new. But we have set up the backend to use HTTP/2, and from time to time we find weird problems.
Today I started playing with "Network Log Export" from Chrome and I found this interesting piece of information in the HTTP2_SESSION part of the log.
t=43659 [st=41415] HTTP2_SESSION_RECV_GOAWAY
--> active_streams = 4
--> debug_data = "Connection [263], Too much overhead so the connection will be closed"
--> error_code = "11 (ENHANCE_YOUR_CALM)"
--> last_accepted_stream_id = 77
--> unclaimed_streams = 0
t=43659 [st=41415] HTTP2_SESSION_CLOSE
--> description = "Connection closed"
--> net_error = -100 (ERR_CONNECTION_CLOSED)
t=43661 [st=41417] HTTP2_SESSION_POOL_REMOVE_SESSION
t=43661 [st=41417] -HTTP2_SESSION
It looks like the root cause of the ERR_CONNECTION_CLOSED is that the server decides there is too much overhead from the same client and closes the connection.
The question is: can we tune the server to accept overhead up to a certain limit? How? I believe this is something we should be able to tune in Spring or Tomcat or somewhere in that stack.
Cheers
Ignacio
The overhead protection was put in place in response to a collection of CVEs reported against HTTP/2 in the middle of 2019. While Tomcat wasn't directly affected (the malicious input didn't trigger excessive load), we did take steps to block input that matched the malicious profile.
From your GitHub comment, you see issues with POSTs. That strongly suggests that the client is sending the POST data in multiple small packets rather than a smaller number of larger packets. Some clients (e.g. Chrome) are known to do this occasionally due to the way they buffer data.
A number of the HTTP/2 DoS attacks could be summarized as sending more overhead than data. While Tomcat wasn't directly affected, we took the decision to monitor for clients operating in this way and drop connections if any were found on the grounds that the client was likely to be malicious.
Generally, data packets reduce the overhead count, non-data packets increase the overhead count and (potentially) malicious packets increase the overhead count significantly. The idea is that an established, generally well-behaved, connection should be able to survive the occasional 'suspect' packet but any more than that will quickly trigger the connection to be closed.
In terms of small POST packets, the key configuration settings are:
overheadCountFactor
overheadDataThreshold
The overhead count starts at -10. For every DATA frame received it is reduced by 1. For every SETTINGS, PRIORITY and PING frame it is increased by overheadCountFactor. If the overhead count goes above 0, the connection is closed.
In addition, if the average size of a received non-final DATA frame and the previously received DATA frame (on that same stream) is less than overheadDataThreshold then the overhead count is increased by overheadDataThreshold/(average size of current and previous DATA frames). In this way, the smaller the DATA frame, the greater the increase in the overhead. A small number of small non-final DATA frames should be enough to trigger connection closure.
The averaging is there so buffering such as exhibited by Chrome does not trigger the overhead protection.
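To make that accounting concrete, here is a minimal sketch in C that models the description above. It is an illustration only, not Tomcat's actual code: the value of 10 for overheadCountFactor, the handling of the very first DATA frame on a stream, and the single-stream bookkeeping are assumptions.

#include <stdio.h>

#define OVERHEAD_COUNT_FACTOR   10    /* assumed default; check your Tomcat version */
#define OVERHEAD_DATA_THRESHOLD 1024  /* default mentioned above */

static int overhead_count = -10;      /* the count starts at -10 */
static int prev_data_frame_size = -1; /* -1: no previous non-final DATA frame yet (single stream assumed) */

/* SETTINGS, PRIORITY and PING frames increase the count */
static void on_non_data_frame(void) {
    overhead_count += OVERHEAD_COUNT_FACTOR;
}

/* DATA frames decrease the count, but small non-final frames add a penalty */
static void on_data_frame(int size, int is_final) {
    overhead_count -= 1;
    if (!is_final) {
        int avg = (prev_data_frame_size < 0) ? size
                                             : (size + prev_data_frame_size) / 2;
        if (avg > 0 && avg < OVERHEAD_DATA_THRESHOLD)
            overhead_count += OVERHEAD_DATA_THRESHOLD / avg;
        prev_data_frame_size = size;
    }
}

static int connection_should_close(void) {
    return overhead_count > 0;        /* above 0 => GOAWAY / ENHANCE_YOUR_CALM */
}

int main(void) {
    on_non_data_frame();              /* e.g. a PING frame: +overheadCountFactor */
    on_data_frame(4096, 1);           /* large final DATA frames: -1 each, no penalty */
    on_data_frame(4096, 1);

    /* a burst of small non-final DATA frames, e.g. a drip-fed POST body */
    int frames = 0;
    while (!connection_should_close()) {
        on_data_frame(256, 0);
        frames++;
    }
    printf("closed after %d small non-final DATA frames (overhead count = %d)\n",
           frames, overhead_count);
    return 0;
}

Running the sketch shows the behaviour described above: a handful of 256-byte non-final DATA frames is enough to push the count above 0, while the large final frames and the occasional PING are absorbed.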
To diagnose this problem you need to look at the logs to see what size non-final DATA frames are being sent by the client. I suspect that will show a series of non-final DATA frames with size less than 1024 (the default for overheadDataThreshold).
To fix the issue my recommendation is to look at the client first. Why is it sending small non-final DATA frames and what can be done to stop it?
If you need an immediate mitigation then you can reduce overheadDataThreshold. The information you get on DATA frame sizes sent by the client should guide you as to what to set this to. It needs to be smaller than DATA frames being sent by the client. In extremis you can set overheadDataThreshold to zero to disable the protection.
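For standalone Tomcat, these are attributes of the HTTP/2 UpgradeProtocol element in server.xml. A hedged sketch follows; the values are purely illustrative, the surrounding SSL configuration is omitted, and with Spring Boot's embedded Tomcat the equivalent values would have to be set on the connector's Http2Protocol programmatically.

<!-- server.xml (standalone Tomcat) - illustrative values; check the HTTP/2 docs for your Tomcat version -->
<Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
           SSLEnabled="true" maxThreads="200">
    <UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol"
                     overheadCountFactor="10"
                     overheadDataThreshold="512" />
    <!-- SSLHostConfig / Certificate elements omitted for brevity -->
</Connector>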

Data structure used for a buffer

I attended a developer interview recently and I was asked the following question:
I have a server that can handle 20 requests. Which data structure is used to model this? What will happen if there are more than 20 requests? i.e., what will you do in case of buffer overflow?
I am not from a CS background; I am transitioning from a different field and am self-taught in programming and DSA. So I would like to know the answers to these questions. Thanks in advance!
Regarding a server that can handle 20 simultaneous requests:
Your question indicates that you are not yet thinking about this in a reasonable way and are probably quite far from understanding how it works. No problem -- it just means that maybe you have more to learn than you expect.
To help you along, I will write you the correct answer, full of terms you can google for:
When a client attempts to connect to your server, the kernel puts its request into a 'listen queue' attached to your server's listening 'socket'.
When your server is ready to service a request, it 'accepts' a connection from the listening socket, which creates a new socket for the communication between the client and server, and the server then processes the request.
If your server can handle 20 simultaneous requests, it typically means that it can have up to 20 threads processing connections at the same time. That is usually accomplished by using a 'thread pool' of limited size. When a thread in the pool is available, it gets a new connection from the listening socket (it might have to wait for one) and processes it, and it is only the fact that there are at most 20 of these threads that limits the number of requests you will handle simultaneously (nothing to do with a buffer of any kind, really).
If the server is already processing 20 simultaneous requests when a new one comes in, then the client's request will wait in the socket listen queue until the server eventually picks it up, or it will time out and fail if it has been waiting too long.
There is also a limit (the TCP backlog) on the number of connection requests that can be waiting in the listen queue. If a connection request comes in when the listen queue is full, it is immediately rejected. If you want your server to handle 20 simultaneous requests, then the listen queue should have length at least 20 in case 20 requests arrive at the same time -- they will all get queued until your server picks them up.
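To make that shape concrete, here is a minimal sketch in C: a listening socket with a backlog of 20 and a fixed pool of 20 worker threads, each of which accepts and handles one queued connection at a time. Error handling is omitted, handle_request is a placeholder echo, and the port is arbitrary; compile with -pthread.

#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

#define WORKERS 20   /* at most 20 requests processed simultaneously */
#define BACKLOG 20   /* connections allowed to wait in the listen queue */

static void handle_request(int client) {
    /* placeholder: read the request, write a response (echo for illustration) */
    char buf[512];
    ssize_t n = read(client, buf, sizeof buf);
    if (n > 0)
        (void)write(client, buf, (size_t)n);
    close(client);
}

static void *worker(void *arg) {
    int listener = *(int *)arg;
    for (;;) {
        /* blocks until a queued connection is available in the listen queue */
        int client = accept(listener, NULL, NULL);
        if (client >= 0)
            handle_request(client);
    }
    return NULL;
}

int main(void) {
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);
    bind(listener, (struct sockaddr *)&addr, sizeof addr);
    listen(listener, BACKLOG);           /* the 'listen queue' discussed above */

    pthread_t pool[WORKERS];             /* the 'thread pool' of limited size */
    printf("listening on port 8080 with %d workers\n", WORKERS);
    for (int i = 0; i < WORKERS; i++)
        pthread_create(&pool[i], NULL, worker, &listener);
    for (int i = 0; i < WORKERS; i++)
        pthread_join(pool[i], NULL);
    return 0;
}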

What does multiplexing mean in HTTP/2

Could someone please explain multiplexing in relation to HTTP/2 and how it works?
Put simply, multiplexing allows your browser to fire off multiple requests at once on the same connection and receive the responses back in any order.
And now for the much more complicated answer...
When you load a web page, the browser downloads the HTML page, sees it needs some CSS, some JavaScript, a load of images... etc.
Under HTTP/1.1 you can only download one of those at a time on your HTTP/1.1 connection. So your browser downloads the HTML, then it asks for the CSS file. When that's returned it asks for the JavaScript file. When that's returned it asks for the first image file... etc. HTTP/1.1 is basically synchronous - once you send a request you're stuck until you get a response. This means most of the time the browser is not doing very much, as it has fired off a request, is waiting for a response, then fires off another request, then is waiting for a response... etc. Of course complex sites with lots of JavaScript do require the browser to do lots of processing, but that depends on the JavaScript being downloaded, so, at least at the beginning, the delays inherent in HTTP/1.1 do cause problems. Typically the server isn't doing very much either (at least per request - of course they add up for busy sites), because it should respond almost instantly for static resources (like CSS, JavaScript, images, fonts... etc.) and hopefully not too much longer even for dynamic requests (that require a database call or the like).
So one of the main issues on the web today is the network latency in sending the requests between browser and server. It may only be tens or perhaps hundreds of milliseconds, which might not seem much, but they add up and are often the slowest part of web browsing - especially as websites get more complex and require extra resources (as they are) and Internet access is increasingly via mobile (with higher latency than broadband).
As an example, let's say there are 10 resources that your web page needs to load after the HTML is loaded itself (which is a very small site by today's standards, as 100+ resources is common, but we'll keep it simple and go with this example). And let's say each request takes 100ms to travel across the Internet to the web server and back, and the processing time at either end is negligible (let's say 0 for this example for simplicity's sake). As you have to send each resource and wait for a response one at a time, this will take 10 * 100ms = 1,000ms or 1 second to download the whole site.
To get around this, browsers usually open multiple connections to the web server (typically 6). This means a browser can fire off multiple requests at the same time, which is much better, but at the cost of the complexity of having to set up and manage multiple connections (which impacts both browser and server). Let's continue the previous example and also say there are 4 connections and, for simplicity, let's say all requests are equal. In this case you can split the requests across all four connections, so two will have 3 resources to get and two will have 2 resources to get, totalling the ten resources (3 + 3 + 2 + 2 = 10). In that case the worst case is 3 round trips or 300ms = 0.3 seconds - a good improvement, but this simple example does not include the cost of setting up those multiple connections, nor the resource implications of managing them (which I've not gone into here as this answer is long enough already, but setting up separate TCP connections does take time and other resources - to do the TCP connection, HTTPS handshake and then get up to full speed due to TCP slow start).
HTTP/2 allows you to send off multiple requests on the same connection - so you don't need to open multiple connections as per above. So your browser can say "Gimme this CSS file. Gimme that JavaScript file. Gimme image1.jpg. Gimme image2.jpg... Etc." to fully utilise the one single connection. This has the obvious performance benefit of not delaying sending of those requests while waiting for a free connection. All these requests make their way through the Internet to the server in (almost) parallel. The server responds to each one, and then they start to make their way back. In fact it's even more powerful than that, as the web server can respond to them in any order it feels like and send back files in a different order, or even break each file requested into pieces and intermingle the files together. This has the secondary benefit of one heavy request not blocking all the other subsequent requests (known as the head-of-line blocking issue). The web browser is then tasked with putting all the pieces back together. In the best case (assuming no bandwidth limits - see below), if all 10 requests are fired off pretty much at once in parallel, and are answered by the server immediately, this means you basically have one round trip, or 100ms (0.1 seconds), to download all 10 resources. And this has none of the downsides that multiple connections had for HTTP/1.1! This is also much more scalable as resources on each website grow (currently browsers open up to 6 parallel connections under HTTP/1.1, but should that grow as sites get more complex?).
This diagram shows the differences, and there is an animated version too.
Note: HTTP/1.1 does have the concept of pipelining, which also allows multiple requests to be sent off at once. However, they still had to be returned in the order they were requested, in their entirety, so it's nowhere near as good as HTTP/2, even if conceptually it's similar. Not to mention the fact that this is so poorly supported by both browsers and servers that it is rarely used.
One thing highlighted in the comments below is how bandwidth impacts us here. Of course your Internet connection is limited by how much you can download, and HTTP/2 does not address that. So if those 10 resources discussed in the above examples are all massive print-quality images, then they will still be slow to download. However, for most web browsing, bandwidth is less of a problem than latency. So if those ten resources are small items (particularly text resources like CSS and JavaScript which can be gzipped to be tiny), as is very common on websites, then bandwidth is not really an issue - it's the sheer volume of resources that is often the problem and HTTP/2 looks to address that. This is also why concatenation is used in HTTP/1.1 as another workaround, so for example all CSS is often joined together into one file: the amount of CSS downloaded is the same, but by doing it as one resource there are huge performance benefits (though less so with HTTP/2, and in fact some say concatenation should be an anti-pattern under HTTP/2 - though there are arguments against doing away with it completely too).
To put it as a real world example: assume you have to order 10 items from a shop for home delivery:
HTTP/1.1 with one connection means you have to order them one at a time and you cannot order the next item until the last arrives. You can understand it would take weeks to get through everything.
HTTP/1.1 with multiple connections means you can have a (limited) number of independent orders on the go at the same time.
HTTP/1.1 with pipelining means you can ask for all 10 items one after the other without waiting, but then they all arrive in the specific order you asked for them. And if one item is out of stock then you have to wait for that before you get the items you ordered after that - even if those later items are actually in stock! This is a bit better but is still subject to delays, and let's say most shops don't support this way of ordering anyway.
HTTP/2 means you can order your items in any particular order - without any delays (similar to above). The shop will dispatch them as they are ready, so they may arrive in a different order than you asked for them, and they may even split items so some parts of that order arrive first (so better than above). Ultimately this should mean you 1) get everything quicker overall and 2) can start working on each item as it arrives ("oh that's not as nice as I thought it would be, so I might want to order something else as well or instead").
Of course you're still limited by the size of your postman's van (the bandwidth) so they might have to leave some packages back at the sorting office until the next day if they are full up for that day, but that's rarely a problem compared to the delay in actually sending the order across and back. Most of web browsing involves sending small letters back and forth, rather than bulky packages.
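To see multiplexing from the client side in code, here is a hedged sketch using libcurl's multi interface (assuming a reasonably recent libcurl built with HTTP/2 support; the URLs are placeholders and cleanup of the individual easy handles is omitted; link with -lcurl). All transfers are added up front and, where the server supports it, libcurl runs them in parallel over a single connection.

#include <curl/curl.h>

int main(void) {
    const char *urls[] = { "https://example.com/style.css",   /* placeholder URLs */
                           "https://example.com/app.js",
                           "https://example.com/image1.jpg" };

    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURLM *multi = curl_multi_init();
    /* ask libcurl to multiplex transfers to the same host over one connection */
    curl_multi_setopt(multi, CURLMOPT_PIPELINING, CURLPIPE_MULTIPLEX);

    for (int i = 0; i < 3; i++) {
        CURL *h = curl_easy_init();
        curl_easy_setopt(h, CURLOPT_URL, urls[i]);
        curl_easy_setopt(h, CURLOPT_HTTP_VERSION, (long)CURL_HTTP_VERSION_2TLS);
        curl_multi_add_handle(multi, h);   /* all requests queued before any completes */
    }

    int running = 0;
    do {                                   /* drive all transfers concurrently */
        curl_multi_perform(multi, &running);
        curl_multi_wait(multi, NULL, 0, 1000, NULL);
    } while (running > 0);

    curl_multi_cleanup(multi);
    curl_global_cleanup();
    return 0;
}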
Since @Juanma Menendez's answer is correct while his diagram is confusing, I decided to improve upon it, clarifying the difference between multiplexing and pipelining, notions that are often conflated.
Pipelining (HTTP/1.1)
Multiple requests are sent over the same HTTP connection. Responses are received in the same order. If the first response takes a lot of time, other responses have to wait in line. Similar to CPU pipelining, where an instruction is fetched while another one is being decoded. Multiple instructions are in flight at the same time, but their order is preserved.
Multiplexing (HTTP/2)
Multiple requests are sent over the same HTTP connection. Responses are received in arbitrary order. No need to wait for a slow response that's blocking others. Similar to out-of-order instruction execution in modern CPUs.
Hopefully the improved image clarifies the difference:
Request multiplexing
HTTP/2 can send multiple requests for data in parallel over a single TCP connection. This is the most advanced feature of the HTTP/2 protocol because it allows you to download web files asynchronously from one server. Most modern browsers limit the number of TCP connections they will open to a single server. Multiplexing reduces the additional round-trip time (RTT), making your website load faster without any optimization, and makes domain sharding unnecessary.
Multiplexing in HTTP/2 is the type of relationship between the browser and the server in which a single connection is used to deliver multiple requests and responses in parallel, creating many individual frames in the process.
Multiplexing breaks away from the strict request-response semantics and enables one-to-many or many-to-many relationships.
Simple answer (source):
Multiplexing means your browser can send multiple requests and receive multiple responses "bundled" into a single TCP connection. So the workload associated with DNS lookups and handshakes is saved for files coming from the same server.
Complex/detailed answer:
See the answer provided by @BazzaDP above.

Windows Socket TCP Client Receives data only every 200ms (QTcpSocket)

I am using QTcpSocket to connect to a TCP server (which is running on Ubuntu). The server is sending, at minimum, a 1-byte packet every 40ms. My application is real-time, so it is important that I receive data as fast as possible at the cost of extra network traffic.
Once I have connected a TCP client from Windows, I start receiving packets. However, the readyRead() signal from the QTcpSocket is only emitted once every 200ms (with 5 bytes in the packet). I have looked at the packets in Wireshark; they are actually 5-byte packets coming across.
However, using QTcpSocket on Mac (the exact same code, in fact), I get individual packets every time; all of my 1-byte packets arrive as single-byte packets, which is great.
I tried creating a raw Windows socket (not using QTcpSocket), and got identical behaviour to QTcpSocket on Windows.
What is the difference causing the Mac socket to receive packets at a much higher time resolution? Is there something I can set in setsockopt() which will prevent this 200ms buffering from occurring?
I am aware that setting TCP_NODELAY on the server side will probably solve my problem, but seeing as the Mac TCP Client works as intended, there must be a way to get the same behaviour on Windows.
Setting mySocket->setSocketOption(QAbstractSocket::LowDelayOption, 1); on the server side is the only way I have found to remedy this problem.
For others who stumble upon this coming from search engines:
The above (correct) answer by oggmonster can also be described by:
int on = 1;
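/* TCP_NODELAY disables Nagle's algorithm so small writes go out immediately instead of being coalesced */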
if (setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, (char*)&on, sizeof(on)))
{
return -1;
}
You need to acknowledge each byte of data you receive to give the reply ACKs some data to piggyback on. Talk to whoever designed your protocol.
Trying to answer questions like "Why does it work on X and not on Y?" is only useful when both behaviors aren't correct. If the protocol has no application-level acknowledgements, then both behaviors are correct. If one of them shouldn't be correct, then the protocol should have a mechanism to control that, such as application-layer acknowledgements. If it doesn't, the protocol is broken. Trying to figure out why a broken protocol doesn't work is pointless -- it doesn't work because it's broken.
