I'm trying to understand a TCP session. I tested a TCP connection and noticed that the initial header carried 20 bytes of options, but after the first ACK the header carried only 12 bytes of options.
Why the change? Is it because some options are no longer available?
Some TCP options are only sent with the SYN packet:
Maximum segment size
Window scale
Selective acknowledgement (SACK permitted)
TCP Alternate Checksum request
Looking at one of my network traces, the TCP header was 4 bytes larger in the SYN packet because of the maximum segment size option. You could use Wireshark to see which options are being sent in your traffic.
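As an illustration, here is a minimal Python/scapy sketch (the destination address is a placeholder) showing how the maximum-segment-size option grows the SYN's TCP header from the 20-byte minimum to 24 bytes, while a plain ACK carries no options:

    # Requires scapy (pip install scapy); 192.0.2.1 is a placeholder address.
    from scapy.all import IP, TCP

    # SYN carrying the MSS option: 20-byte base header + 4 option bytes
    syn = IP(dst="192.0.2.1") / TCP(dport=80, flags="S", options=[("MSS", 1460)])

    # Plain ACK with no options: just the 20-byte base header
    ack = IP(dst="192.0.2.1") / TCP(dport=80, flags="A")

    print(len(syn[TCP]))  # 24 -> data offset of 6 32-bit words
    print(len(ack[TCP]))  # 20 -> data offset of 5 32-bit words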
The Wikipedia page has more detail.
I am using a packet capturing and analysis tool called Cisco Joy to generate network flows.
Here is the link: https://github.com/cisco/joy
Joy uses a configuration file to capture packets on a network interface and writes JSON files as output to a directory. I have configured it with AF_PACKET to generate the network flows.
I have been replaying network packets with tcpreplay on a virtual network interface at about 3 Gbps, but Joy is not receiving the packets at the same rate.
Actual: 450889 packets (397930525 bytes) sent in 1.06 seconds
Rated: 374307598.1 Bps, 2994.46 Mbps, 424122.22 pps
Flows: 12701 flows, 11947.01 fps, 450298 flow packets, 591 non-flow
Statistics for network device: vth0
Successful packets: 450889
Failed packets: 0
Truncated packets: 0
Retried packets (ENOBUFS): 0
Retried packets (EAGAIN): 0
Packets Received from Cisco Joy: 260850
So tcpreplay sent over 450k packets, but Cisco Joy received only around 260k.
I have tried changing the size of the buffer into which the packets are captured, but that didn't resolve it. Does anyone have any clue what is going on?
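For reference, the kind of buffer change I tried looks roughly like this raw AF_PACKET sketch in Python (the interface name and the 64 MB figure are just examples, not Joy's actual code):

    import socket

    ETH_P_ALL = 0x0003  # capture all protocols

    # Raw AF_PACKET socket bound to the replay interface (Linux only)
    sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
    sock.bind(("vth0", 0))

    # Ask the kernel for a bigger receive buffer; the effective size is
    # capped by net.core.rmem_max, so that sysctl may need raising too.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 64 * 1024 * 1024)
    print("effective buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))

    while True:
        # Packets arriving faster than this loop drains them are dropped
        # by the kernel once the receive buffer fills.
        sock.recv(65535)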
I need to find out the average size of a gRPC request when I'm sending a string to the server and receiving a string from the server.
I read somewhere that it should be around 20 bytes, but what I see in a network monitor app is that the request is above 500 bytes. Is that the smallest possible gRPC message size, or what?
For a single RPC, gRPC needs to do a few things:
1. Establish HTTP/2
2. (Optional) Establish TLS
3. Exchange headers for the RPCs (size depends on schema)
4. Exchange actual RPC messages (size depends on schema)
5. Close the connection
gRPC is meant to be used for many RPCs on a single connection, so the smallest possible RPC message cost is really just the bytes used for step 4.
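As a rough illustration, here is a minimal Python client (a sketch assuming the stock grpcio helloworld stubs have been generated from helloworld.proto); after the first call, only step 4 recurs per RPC:

    # Assumes: pip install grpcio, and helloworld_pb2 / helloworld_pb2_grpc
    # generated from the example helloworld.proto.
    import grpc
    import helloworld_pb2
    import helloworld_pb2_grpc

    # One channel reused for many RPCs: steps 1-3 are paid once.
    channel = grpc.insecure_channel("localhost:50051")
    stub = helloworld_pb2_grpc.GreeterStub(channel)

    for i in range(3):
        # Each iteration costs only step 4 (plus trailers) on the wire.
        reply = stub.SayHello(helloworld_pb2.HelloRequest(name=str(i)))
        print(reply.message)

    channel.close()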
[Edit]
I checked, and the minimum data exchanged for an RPC is over 500 bytes in terms of raw IP packets.
I used the gRPC helloworld.proto, modified to send an int32.
Inspecting packets in Wireshark showed the following IP packet totals:
1286 bytes to establish connection, exchange headers and do the first rpc
564 bytes for each subsequent rpc
176 bytes for client Shutdown
Of those 564 "minimum" bytes:
67% was TCP/IP overhead (acknowledgements, packet headers)
10% was "trailer" data sent after the RPC
I'm wondering about the differences I found between HTTP Client implementations in JMeter.
There are options to choose HttpClient4 or Java, but for a simple case (www.google.com) the Java implementation always shows 0 for connect time and sent bytes in the sampler results:
Connect Time: 0
Sent bytes:0
While HttpClient4 returns different values each time, such as:
Connect Time: 100
Sent bytes:117
The request body is the same:
GET http://www.google.com/
GET data:
[no cookies]
But the request headers differ: HttpClient4 also sends Host and User-Agent:
Connection: keep-alive
Host: www.google.com
User-Agent: Apache-HttpClient/4.5.5 (Java/1.8.0_25)
Is there a valid reason for these differences?
EDIT
Just to make it more confusing, when choosing the empty implementation (which should use the default), the connect time is always 0 but sent bytes is never 0:
Connect Time: 0
Sent bytes:117
Java and HttpClient4 are two different implementations that can be used by the HTTP Request sampler.
The Java one is less feature-rich than HttpClient4; for example, it does not implement:
sent-bytes metric computation
connect-time metric
Kerberos authentication
There are also other features missing from the Java implementation.
When you select the empty implementation, the value of the jmeter.httpsampler property is used; by default it is HttpClient4 (hc4).
A standard Squid config logs only one CONNECT line for any HTTPS transaction. What is being counted/timed by the bytes and duration fields reported on that line?
I got an answer via the squid-users mailing list [1]:
Unless you are using SSL-Bump or such to process the contents specially.
The duration is from the CONNECT message arriving to the time TCP close
is used to end the tunnel. The size should be the bytes sent to the
client (excluding the 200 reply message itself) during that time.
[1] http://lists.squid-cache.org/pipermail/squid-users/2016-July/011714.html
When using ZMQ to transfer data, the sending side is fast and the data volume is huge, but the receiving side processes it slowly, so data accumulates between the two processes. Does anyone know how to solve this problem? Thanks.
Instead of sending all the data at once, send it in chunks. Something like this:
Client requests file 'xyz' from server
Server responds with the file size only, e.g. 10 MB
Client sets its chunk size accordingly, e.g. 1024 bytes
Client sends read requests to the server for chunks of data:
client -> server: give me bytes 0 to 1023 of file 'xyz'
server -> client: 1st chunk
client -> server: give me bytes 1024 to 2047 of file 'xyz'
server -> client: 2nd chunk
...and so on.
For each response, the client persists the chunk to disk.
This approach lets the client throttle the rate at which data is transmitted from the server. Also, in case of a network failure, since each chunk is persisted, there is no need to re-read the file from the beginning; the client can resume requesting chunks from the point where the last response failed.
You mentioned nothing about language bindings, but this approach should be trivial to implement in just about any language; a Python sketch follows below.
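For example, here is a minimal client-side sketch in Python with pyzmq (the REQ/REP pattern, the JSON command format, the endpoint, and the file name 'xyz' are illustrative assumptions; the server is assumed to answer "size" and "read" commands accordingly):

    import zmq

    CHUNK = 1024  # bytes per request; tune to the receiver's pace

    ctx = zmq.Context()
    sock = ctx.socket(zmq.REQ)
    sock.connect("tcp://localhost:5555")  # assumed server endpoint

    # Steps 1-2: ask the server for the file size only.
    sock.send_json({"cmd": "size", "file": "xyz"})
    size = sock.recv_json()["size"]

    # Steps 3-4: request one chunk at a time, persisting each as it arrives,
    # so the client controls the rate and can resume after a failure.
    with open("xyz", "wb") as out:
        offset = 0
        while offset < size:
            sock.send_json({"cmd": "read", "file": "xyz",
                            "start": offset, "end": offset + CHUNK - 1})
            chunk = sock.recv()  # server replies with the raw bytes
            out.write(chunk)
            offset += len(chunk)

Because the client only asks for the next chunk after it has written the previous one, nothing piles up in ZMQ's internal queues between the two processes.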