I'm developing a TCP Client-Server application.
I'm trying to get the compressed size of the packets I send to the client so I can compare that size with the uncompressed data size and get statistics about the compression ratio obtained.
On the client side, I can get that information by comparing the size of the sent/received data with the size I see in the OnSend/OnReceive events of TIdCompressionIntercept, simply by taking the length of the ABuffer parameter in those event handlers.
But on the server side, TIdServerCompressionIntercept doesn't have those events to hook into.
So the question: how can I get the compressed size of the packets sent/received by the server, so I can compare those sizes with the raw data size of those packets?
Thanks.
Client side code sample:
var
  FRawSentSize,
  FCompressedSentSize,
  FRawReceivedSize,
  FCompressedReceivedSize: UInt64;

function TForm1.SendAndReceive(const ToSend: String): String;
begin
  TCPClient.IOHandler.WriteLn(ToSend);
  Inc(FRawSentSize, Length(ToSend));
  Result := TCPClient.IOHandler.ReadLn;
  Inc(FRawReceivedSize, Length(Result));
end;

procedure TForm1.CompressorSend(ASender: TIdConnectionIntercept; var ABuffer: TIdBytes);
begin
  Inc(FCompressedSentSize, Length(ABuffer));
end;

procedure TForm1.CompressorReceive(ASender: TIdConnectionIntercept; var ABuffer: TIdBytes);
begin
  Inc(FCompressedReceivedSize, Length(ABuffer));
end;
TIdServerCompressionIntercept is not the intercept that does the actual work, so that is why it has no events.
When a new client connects to the server, TIdServerCompressionIntercept will create a TIdCompressionIntercept object that gets assigned to the AContext.Connection.IOHandler.Intercept property. That object does the actual work, just like in the client-side code. Your server's OnConnect or OnExecute event handler can programmatically assign handlers to that object's events as needed.
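For example, a minimal sketch of what that could look like in the server's OnConnect handler; the handler names (ServerCompressorSend/ServerCompressorReceive) and the counter fields are assumptions for illustration, not part of your code:

procedure TForm1.TCPServerConnect(AContext: TIdContext);
var
  Intercept: TIdCompressionIntercept;
begin
  // TIdServerCompressionIntercept has already attached a per-connection
  // TIdCompressionIntercept to this connection's IOHandler at this point
  if AContext.Connection.IOHandler.Intercept is TIdCompressionIntercept then
  begin
    Intercept := TIdCompressionIntercept(AContext.Connection.IOHandler.Intercept);
    Intercept.OnSend := ServerCompressorSend;       // compressed outgoing bytes
    Intercept.OnReceive := ServerCompressorReceive; // compressed incoming bytes
  end;
end;

procedure TForm1.ServerCompressorSend(ASender: TIdConnectionIntercept; var ABuffer: TIdBytes);
begin
  // runs in the client's worker thread; protect shared counters accordingly
  Inc(FServerCompressedSentSize, Length(ABuffer));
end;

procedure TForm1.ServerCompressorReceive(ASender: TIdConnectionIntercept; var ABuffer: TIdBytes);
begin
  Inc(FServerCompressedReceivedSize, Length(ABuffer));
end;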
I get the following behaviour when communicating with a custom client.
By "custom client" I mean a homemade PCB with an FPGA running the Triple-Speed Ethernet Intel FPGA IP. It makes no difference whether a switch sits between the PC and the PCB.
Workflow as seen from the server (a Windows PC), where I detect this behaviour with Wireshark:
Connect to the client (SYN - SYN/ACK - ACK) via Winsock2.connect
Send data larger than the MTU with Winsock2.WSASend (4092 bytes on a 4088-byte MTU)
The packet gets "fragmented" into 2 packets - the don't-fragment bit is set
A retransmission happens (because the client answered too slowly?)
I am using Delphi 10.4 and the Winsock2 functions. Before each send I check with select (FD_ISSET) whether FD_WRITE is set. Nagle is deactivated.
The "retransmission" does not happen every time, and I could not detect any pattern as to when it occurs, except that most of the time it happens when the client needs more than 30 ms to send its ACK.
When the "retransmission" happens, it is not packet 1 or packet 2 that is re-sent, but packet 1 with an offset of 60 bytes, which is the payload of packet 2. The sequence number of packet 1 is incremented by 60 as well, and even the data is correct, offset by 60 accordingly.
When I send 6000 bytes I get the same behaviour, with the sequence number incremented by 1968, which is correct too.
What is happening here?
Can I detect this with Winsock2? Can I set the RTO with Winsock? Why is the sequence number incremented instead of packet 1 being retransmitted as-is?
Source Code of the send function:
function TZWinTCPSock.SendData(out ErrMsg: TAPILogStruct; SendOffset: Cardinal = 0): Boolean;
var
  WSABuff: WSABUF;
  res: Integer;
  IPFlags: Cardinal;
  t: Cardinal;
  WSAErr: Cardinal;
begin
  Result := FALSE;
  WSAErr := WSAGetLastError;
  try
    if not CheckSockValid(ErrMsg) then // checks if fd_write is set
    begin
      exit(false);
    end;
    try
      WSABuff.len := FMem.SendLength;            // 4092 at this time, Cardinal
      WSABuff.buf := @FMem.SendData[SendOffset]; // 8192 bytes reserved, TArray<Byte>
      IPFlags := 0;
      res := WSASend(FSocket, @WSABuff, 1, FMem.SentBytes, IPFlags, nil, nil);
      if res <> SOCKET_ERROR then
      begin
        if FMem.SendLength <> FMem.SentBytes then
        begin
          exit(false);
        end
        else
        begin
          Result := TRUE;
          if WSAGetLastError <> WSAErr then // unexpected WSA error
          begin
            exit(FALSE);
          end;
        end;
      end
      else
      begin
        FLastWSAErr := WSAGetLastError;
        if FLastWSAErr = WSAECONNRESET then
        begin
          Disconnect(ErrMsg);
          exit(false);
        end;
      end;
    except
      on E: Exception do
      begin
        // Some error handling
      end;
    end;
  finally
  end;
end;
Edit 1
The packets have the don't-fragment bit set.
I tried to detect this "retransmission" with Windows Admin Center, but I don't see anything popping up.
I got an answer on Microsoft Q&A.
It looks like it was a tail loss probe, with the destination host at fault because the reply took too long and the SRTT-based timer expired.
Using Delphi 10.3 and the JEDI VCL.
I am communicating with a device that responds to UDP data.
Now I want to be able to check whether I got an answer from the right device, or whether I even got an answer at all.
Currently I am using the following:
function TDIB.ReadData(ACommandCode: WORD; ASendLength: Cardinal; AReceiveLength: Cardinal; AAddress: Cardinal): Integer;
var
  cmdHeader: PDIBCommandHeader;
  UDPSend, UDPRecv: TIdBytes;
  client: TIdUDPClient;
begin
  gRequestPending := TRUE;
  // Reserve memory
  SetLength(UDPSend, SizeOf(TDIBCommandHeader) + Cardinal(ASendLength));
  SetLength(UDPRecv, SizeOf(TDIBCommandHeader) + Cardinal(AReceiveLength));
  cmdHeader := @UDPSend[0];
  cmdHeader.Init(WORD(ACommandCode), AAddress, Max(ASendLength, AReceiveLength));
  client := TIdUDPClient.Create();
  try
    client.Host := ValToIPv4(gDIBAddress);
    client.Port := TDIBPorts.mainPort;
    client.Active := TRUE;
    client.SendBuffer(UDPSend);
    client.ReceiveBuffer(UDPRecv, TDIB.C_CMDTimeout);
  except
    on E: Exception do
    begin
      ShowMessage('Exception');
      client.Free;
    end;
  end;
  SetLength(lastUDPData, Length(UDPRecv));
  Move(UDPRecv[0], lastUDPData[0], Length(UDPRecv));
  client.Free;
  gRequestPending := FALSE;
end;
This is fine when the client is responding, but I am not catching any misbehaviour, such as when the host machine tries to reach the client and the client does not respond.
In the Indy 10 documentation I am missing something like TIdUDPClient.TimedOut or similar.
I want to be able to tell whether the client has not responded within X ms after I sent the UDP packet, and I want to be able to check whether the sender address is the wanted client IP.
I want to be able to tell whether the client has not responded within X ms after I sent the UDP packet
ReceiveBuffer() returns the number of bytes actually received. If no packet is received within the specified timeout, ReceiveBuffer() will return 0.
I want to be able to check whether the sender address is the wanted client IP
Use one of the ReceiveBuffer() overloads that has a VPeerIP output parameter. That will give you the sender IP if a packet is received, or it will give you an empty string if no packet is received.
Do be aware that UDP has a concept of a 0-byte datagram, and ReceiveBuffer() will return 0 for that as well. In the case where 0 is returned, you can use the VPeerIP output string to differentiate between no packet received (VPeerIP = '') and a 0-byte packet received (VPeerIP <> ''), if needed.
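For example, a minimal sketch of that check using the overload mentioned above (this is not your original ReadData; the ExpectedIP parameter and the exceptions are illustrative assumptions only):

// uses IdUDPClient, IdGlobal, SysUtils
function ReceiveChecked(const AClient: TIdUDPClient; var ABuffer: TIdBytes;
  const ExpectedIP: string; ATimeoutMs: Integer): Integer;
var
  PeerIP: string;
  PeerPort: TIdPort;
begin
  // overload with VPeerIP/VPeerPort output parameters
  Result := AClient.ReceiveBuffer(ABuffer, PeerIP, PeerPort, ATimeoutMs);
  if (Result = 0) and (PeerIP = '') then
    raise Exception.Create('No answer within ' + IntToStr(ATimeoutMs) + ' ms'); // timed out
  if PeerIP <> ExpectedIP then
    raise Exception.CreateFmt('Answer came from %s, expected %s', [PeerIP, ExpectedIP]);
end;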
I am trying to write a detailed error message to the system log using the ReportEventW function. Unfortunately, I am encountering problems that are apparently related to limits within the function, but I can't find any real documentation of them: there is a documented limit on dwDataSize and another on the maximum length of each string. I am not violating either of these limits, yet the call still returns FALSE and GetLastError reports RPC_S_INVALID_BOUND.
Through testing, I found that for my test case the limit is driven by the number of strings (wNumStrings): 203 is the most I can pass through correctly (additionally, for 204-206 strings ReportEventW returns TRUE but does not write to the log!). If I add 1024 dummy characters to the first line, I once again get an error and have to reduce the number of lines by, as far as I can tell, the same number of characters I added, which indicates that some total character limit on the whole message comes into play. Unfortunately, I can't match it to any documented limit, even if I ignore what the limits are supposed to apply to: my value of about 33,300 characters is close to the 31,839-character maximum length of an individual string, but sufficiently higher to make me discard the theory that the per-string limit also applies to the total length of the whole message. Apparently, if I add extra raw data the limit goes down again, which suggests a limitation on the size of the whole event log record.
My questions are:
1) Does anyone know the actual limits for writing to the event log?
2) Do these limits change with different operating systems? All my tests were performed on Windows 10 x64, but I have a nasty suspicion that I will encounter different limitations on other OSes.
3) Is this documented somewhere?
Thanks.
Actual code (added on request)
procedure WriteToEventLog(const Messages: array of string; const RawData: AnsiString);
const
  MaxStringCount = High(Word); // it is a WORD! In reality the limit seems to be much smaller
  MaxRawDataLen = 61440;
  EmptyMessage = #0#0#0#0;
type
  TPCharArray = array[0..65535] of PChar;
var
  Handle: THandle;
  Msgs: ^TPCharArray;
  MsgCount: integer;
  DataPtr: PAnsiChar;
  DataLen: integer;
  i: Integer;
begin
  MsgCount := Length(Messages);
  if MsgCount > MaxStringCount then
    MsgCount := MaxStringCount;
  Msgs := AllocMem(MsgCount * SizeOf(PChar));
  try
    for i := 0 to Pred(MsgCount) do
    begin
      if Messages[i] = '' then
        Msgs[i] := EmptyMessage
      else
        Msgs[i] := PChar(Messages[i]);
    end;
    if RawData = '' then
    begin
      DataPtr := nil;
      DataLen := 0;
    end
    else
    begin
      DataPtr := @RawData[1];
      DataLen := Length(RawData);
      if DataLen > MaxRawDataLen then
        DataLen := MaxRawDataLen;
    end;
    Handle := RegisterEventSource(nil, PChar(ParamStr(0)));
    if Handle <> 0 then
    begin
      try
        ReportEvent(Handle, EVENTLOG_ERROR_TYPE, 0, 0, nil, MsgCount, DataLen, Msgs, DataPtr);
      finally
        DeregisterEventSource(Handle);
      end;
    end;
  finally
    FreeMem(Msgs);
  end;
end;
It is called with Messages array containing rows from an EurekaLog report (one row per message, about 300 rows).
I can't answer your questions comprehensively, but I just ran into a similar issue. I only used the wNumStrings and lpStrings parameters and, contrary to documentation, still received the RPC_S_INVALID_BOUND error code (1734). On a nagging suspicion, I reduced the number of strings to 256 and it worked. Sure enough, it failed with 257. This was true regardless of the size of the individual strings. There are probably upper limits for individual strings and total message size too, but I didn't bother figuring those out.
TL/DR: wNumStrings <= 256
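Based on that observation, one possible (untested) workaround is to split the report into chunks of at most 256 strings and write each chunk as its own event; the chunk size reflects the empirical limit above and the helper name is an assumption, not a documented approach:

procedure WriteToEventLogChunked(const Messages: array of string; const RawData: AnsiString);
const
  MaxStringsPerEvent = 256; // empirical limit reported above
var
  Chunk: array of string;
  Start, Count, i: Integer;
begin
  Start := 0;
  while Start < Length(Messages) do
  begin
    Count := Length(Messages) - Start;
    if Count > MaxStringsPerEvent then
      Count := MaxStringsPerEvent;
    SetLength(Chunk, Count);
    for i := 0 to Count - 1 do
      Chunk[i] := Messages[Start + i];
    // reuse the existing procedure; attach the raw data to the first chunk only
    if Start = 0 then
      WriteToEventLog(Chunk, RawData)
    else
      WriteToEventLog(Chunk, '');
    Inc(Start, Count);
  end;
end;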
I have a simple mobile app that takes a series of photos and sends them via SendStream() to the connected profile.

myTetherAppProfile.SendStream(myTetherManager.RemoteProfiles[idConnected],
                              'ImageData',
                              bmpStreamData);

The problem is that, depending on the connection strength, the receiver app doesn't get all of the image streams (the ResourceReceived event isn't triggered on the receiver app).
This would be no problem if I got a response saying that the delivery failed, but I don't get one (SendStream() returns True).
Is there any way, other than implementing a "please answer with another message if you received my image" function, to achieve stable transmission even over a bad connection? Or is app tethering designed to be lossy by default?
Also, after a big batch of images I sometimes get a "connection reset by peer" error. (I'm not sure whether this error is related to the actual problem, so I preferred to mention it.)
Looking at the relevant code from System.Tether.AppProfile (XE8 version), it appears to be a bug. See my inline comments below. Please report to https://quality.embarcadero.com
function TTetheringAppProfile.SendStream(const AProfile: TTetheringProfileInfo; const Description: string;
  const AStream: TStream): Boolean;
var
  LProfileInfo: TTetheringProfileInfo;
  LConnection: TTetheringConnection;
  LCommand: TTetheringCommand;
begin
  if not FindProfile(AProfile.ProfileIdentifier, LProfileInfo) then
    raise ETetheringException.CreateFmt(SNoProfile, [AProfile.ProfileIdentifier]);
  CheckProfileIsConnected(AProfile);
  LConnection := GetConnectionTo(AProfile);
  TMonitor.Enter(LConnection);
  try
    LCommand := LConnection.Protocol.SendCommandWithResponse(SendStreamCommand, Version, Description);
    if LCommand.Command = SendStreamOkResponse then
    begin
      Result := LConnection.Protocol.TransferStream(AStream);
      if Result then
      begin                  // <-- Result here is guaranteed to be True
        LCommand := LConnection.Protocol.ReceiveCommand;
        if LCommand.Command = SendStreamContentOKResponse then
          Result := True;    // <-- sets Result to True if this succeeds,
                             // <-- but nothing sets Result to False if the call failed
      end;
    end
    else
      Result := False;
  finally
    TMonitor.Exit(LConnection);
  end;
end;
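For reference, a sketch of how that inner block would need to read for a failed transfer to actually propagate to the caller (this is not the shipped RTL code, just the obvious correction implied by the comments above):

Result := LConnection.Protocol.TransferStream(AStream);
if Result then
begin
  LCommand := LConnection.Protocol.ReceiveCommand;
  // cover the failure case instead of leaving Result stuck at True
  Result := LCommand.Command = SendStreamContentOKResponse;
end;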
I've got a "send" routine in Delphi 6 that accepts a variable-sized block of data (a fixed-size header followed by varying amounts of data) and the routine eventually calls sendto() in Winsock. I've coded it two ways, once where the passed block is a var (somewhat misleading, but it works) and once where a pointer to the block is passed. A simple version used for benchmarking looks something like:
type
  header = record destination, serialnumber: integer end;
  pheader = ^header;

var
  smallblock: record h: header; data: array[1..5] of integer end;
  bigblock: record h: header; data: array[1..100] of integer end;

procedure send1(var h: header; size: integer);
begin
  h.destination := 1; // typical header adjustments before sendto()
  h.serialnumber := 2;
  sendto(sock, h, size, 0, client, sizeof(client))
end;

procedure send2(p: pheader; size: cardinal);
begin
  p^.destination := 1;
  p^.serialnumber := 2;
  sendto(sock, p^, size, 0, client, sizeof(client))
end;

procedure doit1;
begin
  send1(smallblock.h, sizeof(smallblock));
  send1(bigblock.h, sizeof(bigblock));
end;

procedure doit2;
begin
  send2(@smallblock, sizeof(smallblock));
  send2(@bigblock, sizeof(bigblock));
end;
The "send" routine will be called often, with many different block sizes, and should be as fast as possible. After doing a few runs of some simple benchmarks (by timing calls with gettickcount), the pointer technique (doit2) seems to run about 3% faster on my machine than the var technique (doit1), although I don't see any real difference between the two techniques in the object code (not that I'm an assembler guru).
Is the 3% an illusion due to my crude benchmarks, or is the pointer technique really beating the var technique?
There is no performance difference passing a var parameter versus a pointer parameter. They do exactly the same thing (pass a memory address), and compile to similar, if not identical, assembly code. So any benchmarking differences are likely to be caused by issues in the benchmarking itself, not in the code that is being benchmarked. GetTickCount() is not exactly the best benchmarking tool, for instance. The best way to time your code is to use an external profiler, like AQTime.
BTW, your doit2() test should be like this instead:
procedure doit2;
begin
  send2(@(smallblock.h), sizeof(smallblock));
  send2(@(bigblock.h), sizeof(bigblock));
end;
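If you want something finer-grained than GetTickCount() without reaching for an external profiler, a rough sketch using QueryPerformanceCounter (available to Delphi 6 via the Windows unit) could look like this; the iteration count is arbitrary and only for illustration:

// uses Windows, SysUtils, Dialogs
procedure BenchmarkSends;
var
  Freq, T0, T1: Int64;
  i: Integer;
begin
  QueryPerformanceFrequency(Freq);
  QueryPerformanceCounter(T0);
  for i := 1 to 100000 do
    doit1; // run again with doit2 to compare the two techniques
  QueryPerformanceCounter(T1);
  ShowMessage(Format('Elapsed: %.3f ms', [(T1 - T0) * 1000.0 / Freq]));
end;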