I am trying to write a detailed error message to the system log using the ReportEventW function. Unfortunately, I am encountering problems which are apparently related to the limits within the function but I can't find any real documentation of them: there is a documented limit on dwDataSize and another limit on the maximum length of each string. I am not violating any of these limits, but I am still receiving a FALSE and GetLastError reports RPC_S_INVALID_BOUND.
Through testing, I found that for my test case the limit is caused by the number of strings (wNumStrings): 203 is the most I can pass successfully (and, curiously, for 204-206 strings ReportEventW returns TRUE but nothing is written to the log!). If I add 1024 dummy characters to the first line, I get an error again and have to decrease the number of lines by, as far as I can tell, the same number of characters I added, which indicates that some total character limit on the whole message is coming into play. Unfortunately, I can't match it against any documented limit even if I ignore what the limits are supposed to apply to: my value of about 33300 characters is close to 31839 characters (the maximum length of each string), but enough above it to make me discard the theory that the limit on the length of an individual string also applies to the total length of the whole message. Moreover, if I add extra raw data, the limit goes down again, which suggests a limitation on the size of the whole event log record.
My questions are:
1) Does anyone know the actual limits for writing to the event log?
2) Do these limits change with the different operating systems? All my tests were performed on Win10 x64, but I have a nasty suspicion that with different OSes, I will encounter a different limitation.
3) Is this documented somewhere?
Thanks.
Actual code (added on request)
procedure WriteToEventLog(const Messages: array of string; const RawData: AnsiString);
const
MaxStringCount = High(Word); // it is a WORD! In practice the limit seems to be much smaller
MaxRawDataLen = 61440;
EmptyMessage = #0#0#0#0;
type
TPCharArray = array[0..65535] of PChar;
var
Handle: THandle;
Msgs: ^TPCharArray;
MsgCount: integer;
DataPtr: PAnsiChar;
DataLen: integer;
i: Integer;
begin
MsgCount := Length(Messages);
if MsgCount > MaxStringCount then
MsgCount := MaxStringCount;
Msgs := AllocMem(MsgCount * Sizeof(PChar));
try
for i := 0 to Pred(MsgCount) do
begin
if Messages[i] = ''
then Msgs[i] := EmptyMessage
else Msgs[i] := PChar(Messages[i]);
end;
if RawData = '' then
begin
DataPtr := nil;
DataLen := 0;
end
else
begin
DataPtr := @RawData[1];
DataLen := Length(RawData);
if DataLen > MaxRawDataLen then
DataLen := MaxRawDataLen;
end;
Handle := RegisterEventSource(nil, PChar(ParamStr(0)));
if Handle <> 0 then
begin
try
ReportEvent(Handle, EVENTLOG_ERROR_TYPE, 0, 0, nil, MsgCount, DataLen, Msgs, DataPtr);
finally
DeregisterEventSource(Handle);
end;
end;
finally
FreeMem(Msgs);
end;
end;
It is called with a Messages array containing the rows of an EurekaLog report (one row per message, about 300 rows).
I can't answer your questions comprehensively, but I just ran into a similar issue. I only used the wNumStrings and lpStrings parameters and, contrary to documentation, still received the RPC_S_INVALID_BOUND error code (1734). On a nagging suspicion, I reduced the number of strings to 256 and it worked. Sure enough, it failed with 257. This was true regardless of the size of the individual strings. There are probably upper limits for individual strings and total message size too, but I didn't bother figuring those out.
TL;DR: keep wNumStrings <= 256.
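If you need to log more rows than that, one workaround is to split the report into batches and write several events. A minimal sketch that reuses the WriteToEventLog procedure from the question (the helper name and the batch size constant are my own; 256 is just the empirically safe value mentioned above, and the total-size limit described in the question may still force smaller batches when individual rows are long):
procedure WriteToEventLogInBatches(const Messages: array of string; const RawData: AnsiString);
const
  MaxStringsPerEvent = 256; // empirical upper bound, see the answer above
var
  Batch: array of string;
  Start, Count, i: Integer;
begin
  Start := 0;
  while Start < Length(Messages) do
  begin
    Count := Length(Messages) - Start;
    if Count > MaxStringsPerEvent then
      Count := MaxStringsPerEvent;
    SetLength(Batch, Count);
    for i := 0 to Count - 1 do
      Batch[i] := Messages[Start + i];
    // attach the raw data only to the first event to keep each record small
    if Start = 0 then
      WriteToEventLog(Batch, RawData)
    else
      WriteToEventLog(Batch, '');
    Inc(Start, Count);
  end;
end;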
Related
I get the following when communicating with a custom client.
By custom client I mean an in-house PCB with an FPGA which runs the Triple-Speed Ethernet Intel FPGA IP. It does not make a difference whether a switch is between the PC and the PCB.
Workflow as seen from the server (a Windows PC), where I detect this behaviour with Wireshark:
Connect to the client (SYN - SYN/ACK - ACK) via Winsock2.connect
Send data > MTU with Winsock2.WSASend (4092 bytes on a 4088-byte MTU)
The packet gets "fragmented" into 2 packets - the don't-fragment bit is set
A retransmission happens (because the client answered too slowly?)
I am using Delphi 10.4 with the Winsock2 functions. Before each send I check with select whether fd_write is set (FD_ISSET). Nagle is deactivated.
The "retransmission" does not happen every time, and I could not detect any pattern in when it occurs, except that most of the time it is when the client needs more than 30 ms to send its ACK.
When the "retransmission" happens, it is not packet 1 or packet 2 that is re-sent, but packet 1 with an offset of 60, which is the payload of packet 2. The sequence number of packet 1 is incremented by 60 too. Even the data is correct; it is correctly offset by 60.
When I send 6000 bytes I get the same behaviour, with the sequence number incremented by 1968, which is correct too.
What is happening here?
Can I detect this with Winsock2? Can I set the RTO with Winsock? Why is the sequence number incremented rather than packet 1 being retransmitted as it is?
Source Code of the send function:
function TZWinTCPSock.SendData (out ErrMsg : TAPILogStruct; SendOffset :
Cardinal = 0) : Boolean;
var
WSABuff : WSABUF;
res : Integer;
IPFlags : Cardinal;
t : Cardinal;
WSAErr : Cardinal;
begin
Result := FALSE;
WSAErr := WSAGetLastError;
try
if not CheckSockValid(ErrMsg) then // checks if fd_write is set
begin
exit(false);
end;
try
WSABuff.len := FMem.SendLength; // 4092 at this time Cardinal
WSABuff.buf := @FMem.SendData[SendOffset]; // 8192 bytes reserved, TArray<Byte>
IPFlags := 0;
res := WSASend(FSocket, @WSABuff, 1, FMem.SentBytes, IPFlags, nil, nil);
if Res <> SOCKET_ERROR then
begin
if FMem.SendLength <> FMem.SentBytes then
begin
exit(false);
end
else
begin
Result := TRUE;
if WSAGetLastError <> WSAErr then // unexpected WSA error
begin
exit(FALSE);
end;
end;
end
else
begin
FLastWSAErr := WSAGetLastError;
if FLastWSAErr = WSAECONNRESET then
begin
Disconnect(ErrMsg);
exit(false);
end;
end;
except
on E : Exception do
begin
// Some error handling
end;
end;
finally
end;
end;
Edit1
The packets have the don't-fragment bit set.
I tried to detect this "retransmission" with the Windows Admin Center, but I don't see anything popping up.
Got an answer on Microsoft Q&A.
It looks like it was a tail loss probe problem where the destination host is at fault, because the reply took too long and the SRTT-based timer expired.
I have been using the Lazarus/FPC Blowfish library to encrypt file streams a while, and it works very well for me.
Now I tried to adapt the library to encrypt and decrypt arbitrary memory structures (records, but also strings), and ran into a problem which I could not resolve in days, so please help.
The problem is that strings whose length is an exact multiple of the Blowfish block size (8 bytes) are encrypted and decrypted properly. If a string does not end on an exact 8-byte boundary, the characters beyond that boundary are mangled.
Here is the code (for the complete Lazarus project please follow the link)
Link to Lazarus project zip
Procedure BlowfishEncrypt(var Contents;ContentsLength:Integer;var Key:String);
// chop Contents into 64 bit blocks and encrypt using key
var
arrShadowContent:Array of Byte absolute Contents;
objBlowfish: TBlowFish;
BlowfishBlock: TBFBlock;
ptrBlowfishKey:PBlowFishKey;
p1,count,maxP:integer;
begin
ptrBlowfishKey := addr(Key[1]);
objBlowfish := TBlowFish.Create(ptrBlowfishKey^,Length(Key));
p1 := 0;
maxP := ContentsLength - 1;
count := SizeOf(BlowfishBlock);
while p1 < maxP do
begin
fillChar(BlowfishBlock,SizeOf(BlowfishBlock),0); // only for debugging
if p1 + count > maxP then
count := ContentsLength - p1;
Move(arrShadowContent[p1],BlowfishBlock,count);
objBlowfish.Encrypt(BlowfishBlock);
Move(BlowfishBlock,arrShadowContent[p1],count);
p1 := p1 + count;
end;
FreeAndNil(objBlowfish);
end;
And this is the call ...
procedure TForm1.CryptButtonClick(Sender: TObject);
var
ContentsBuffer,Key:String;
begin
ContentsBuffer := PlainTextEdit.Text;
Key := KeyTextEdit.Text;
BlowFishEncrypt(ContentsBuffer,Length(ContentsBuffer),Key);
CryptOutEdit.Text := ContentsBuffer;
BlowFishDecrypt(ContentsBuffer,Length(ContentsBuffer),Key);
CryptCheckEdit.Text := ContentsBuffer;
end;
And here you see what's happening:
Screenshot of Test GUI
Resolved. The problem is that I am not allowed to truncate a TBFBlock. For the Blowfish decryption to work, ContentsLength must be a multiple of the TBFBlock size.
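One way to meet that requirement is to pad the data up to the next multiple of SizeOf(TBFBlock) before calling BlowfishEncrypt, and to pass the same padded length to BlowfishDecrypt, keeping the original length so the padding can be stripped afterwards. A minimal sketch against the procedure from the question (it assumes the Lazarus default where String is an AnsiString and Length gives the byte count; the helper name EncryptPadded is mine):
procedure EncryptPadded(var Contents: String; var Key: String; out OriginalLength: Integer);
var
  PaddedLen, i: Integer;
begin
  OriginalLength := Length(Contents);
  // round up to the next whole Blowfish block (8 bytes)
  PaddedLen := ((OriginalLength + SizeOf(TBFBlock) - 1) div SizeOf(TBFBlock)) * SizeOf(TBFBlock);
  SetLength(Contents, PaddedLen);
  for i := OriginalLength + 1 to PaddedLen do
    Contents[i] := #0; // deterministic zero padding
  // the string variable itself is passed, matching the call style in the question
  BlowfishEncrypt(Contents, PaddedLen, Key);
end;
After decrypting with the same padded length, SetLength(Contents, OriginalLength) restores the original text.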
My question has been asked here before, but I'm trying to implement a neater solution for my project. So, as the title states, I'm creating a complex installer for a server application that has to check the local IP address and choose an open port, so that the application can be properly configured. Inno Setup 5.6.1 is used.
Getting local IP addresses was not a problem, this solution helped me a lot. Then it came to checking the port, and here I found the following three options:
Using an external DLL from the installer. Actually, the previous solution consisted of a C++ DLL that exported exactly two convenience functions. They worked great and the installer used them, but sometimes, rarely, on some Windows versions the DLL failed to load, causing an error. That's the reason for all this messing with Pascal Script.
Launching netstat via cmd and parsing the output. This is still an option, though I feel this solution is an ugly crutch and would like to avoid it. Details can be found in another SO answer.
Getting the information from a WinAPI call. Looks best, if possible.
As mentioned above, getting the IP address can be implemented via straightforward (OK, not really, it's Pascal Script) WinAPI calls. So I tried to do the same trick with ports, calling GetTcpTable():
[Code]
const
ERROR_INSUFFICIENT_BUFFER = 122;
function GetTcpTable(pTcpTable: Array of Byte; var pdwSize: Cardinal;
bOrder: WordBool): DWORD;
external 'GetTcpTable#IpHlpApi.dll stdcall';
{ ------------------------ }
function CheckPortIsOpen(port: Integer): Boolean;
var
TableSize : Cardinal;
Buffer : Array of Byte; { Alas, no pointers here }
RecordCount : Integer;
i, j : Integer;
portNumber : Cardinal;
IpAddr : String;
begin
Result := True;
TableSize := 0;
if GetTcpTable(Buffer, TableSize, False) = ERROR_INSUFFICIENT_BUFFER then
begin
SetLength(Buffer, TableSize);
if GetTcpTable(Buffer, TableSize, True) = 0 then
begin
{ some magic calculation from GetIpAddrTable calling example }
RecordCount := (Buffer[1] * 256) + Buffer[0];
For i := 0 to RecordCount -1 do
begin
portNumber := Buffer[i*20 + 8]; { Should work! }
{ Debugging code here }
if (i < 5) then begin
IpAddr := '';
For J := 0 to 3 do
begin
if J > 0 then
IpAddr := IpAddr + '_';
IpAddr := IpAddr + IntToStr(Buffer[I*20+ 4 + J]);
end;
SuppressibleMsgBox(IpAddr, mbError, MB_OK, MB_OK);
end;
{ ------ }
if port = portNumber then
Result := False;
end;
end;
end;
end;
This GetTcpTable also returns information about addresses and ports (table of TCP connections to be exact), so trying to get any connection address is good for debugging purposes. More about this attempt:
RecordCount is calculated the same way as in the code I used as an example, because the struct obtained there is very similar to the nasty struct I need.
The i*20 + 8 is written that way because 20 = sizeof(single record struct) and 8 = 2 * sizeof(DWORD). The local TCP connection address is being "parsed" byte by byte at an offset of 1 DWORD, as you can see.
So, everything is great fun... it just is not working =((
And yes, I've tried to print all the bytes one by one, to search for the desired data manually and work out the correct offset. To my disappointment, nothing looking like IPs and ports was found; the numbers were quite mysterious.
I know that sometimes the simplest solution is best, not the smartest, but if anyone could show me the proper way to cook this WinAPI function, I would be deeply grateful.
Your magic calculations are off.
portNumber := Buffer[i*20 + 8]; { Should work! }
Since Buffer is a byte array the above extracts one byte only. But the local port number is a DWORD in the TCP table. Though the documentation you linked states:
The local port number in network byte order for the TCP connection on the local computer.
The maximum size of an IP port number is 16 bits, so only the lower 16 bits should be used. The upper 16 bits may contain uninitialized data.
So we need two bytes. And we need to switch them, note "network byte order" above.
You are also forgetting to account for the 4 byte record count at the beginning of the table. So the local port number should become:
portNumber := Buffer[i*20 + 12] * 256 +
Buffer[i*20 + 13];
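Putting those corrections together, the scanning loop could be sketched like this (it keeps the question's buffer-based approach and the same GetTcpTable import; only the offsets change):
function CheckPortIsOpen(port: Integer): Boolean;
var
  TableSize   : Cardinal;
  Buffer      : Array of Byte;
  RecordCount : Integer;
  i           : Integer;
  portNumber  : Cardinal;
begin
  Result := True;
  TableSize := 0;
  if GetTcpTable(Buffer, TableSize, False) = ERROR_INSUFFICIENT_BUFFER then
  begin
    SetLength(Buffer, TableSize);
    if GetTcpTable(Buffer, TableSize, True) = 0 then
    begin
      { dwNumEntries is the first DWORD of MIB_TCPTABLE, little-endian }
      RecordCount := (Buffer[1] * 256) + Buffer[0];
      for i := 0 to RecordCount - 1 do
      begin
        { each 20-byte MIB_TCPROW starts after the 4-byte count;
          dwLocalPort is its third DWORD, stored in network byte order }
        portNumber := Buffer[i*20 + 12] * 256 + Buffer[i*20 + 13];
        if port = portNumber then
          Result := False;
      end;
    end;
  end;
end;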
I don't work with Pascal very often so I apologise if this question is basic. I am working on a binary file program that writes an array of custom made records to a binary file.
Eventually I want it to be able to write multiple arrays of different custom record types to one single binary file.
For that reason I thought I would first write an integer giving the total number of bytes of the array that follows, then write the array itself. When reading, I can then read that first integer block to tell me the size of the following blocks to read directly into an array.
For example - when writing the binary file I would do something like this:
assignfile(f,MasterFileName);
{$I-}
reset(f,1);
{$I+}
n := IOResult;
if n<> 0 then
begin
{$I-}
rewrite(f);
{$I+}
end;
n:= IOResult;
If n <> 0 then
begin
writeln('Error creating file: ', n);
end
else
begin
SetLength(MyArray, 2);
MyArray[0].ID := 101;
MyArray[0].Att1 := 'Hi';
MyArray[0].Att2 := 'MyArray 0 - Att2';
MyArray[0].Value := 1;
MyArray[1].ID := 102;
MyArray[1].Att1:= 'Hi again';
MyArray[1].Att2:= 'MyArray 1 - Att2';
MyArray[1].Value:= 5;
SizeOfArray := sizeOf(MyArray);
writeln('Size of character array: ', SizeOfArray);
writeln('Size of integer var: ', sizeof(SizeOfArray));
blockwrite(f,sizeOfArray,sizeof(SizeOfArray),actual);
blockwrite(f,MyArray,SizeOfArray,actual);
Close(f);
end;
Then you could re-read the file with something like this:
Assign(f, MasterFileName);
Reset(f,1);
blockread(f,SizeOfArray,sizeof(SizeOfArray),actual);
blockread(f,MyArray,SizeOfArray,actual);
Close(f);
This has the idea that after these blocks have been read that you can then have a new integer recorded and a new array then saved etc.
It reads the integer parts of the records in but nothing for the strings. The record would be something like this:
TMyType = record
ID : Integer;
att1 : string;
att2 : String;
Value : Integer;
end;
Any help gratefully received!!
TMyType = record
ID : Integer;
att1 : string; // <- your problem
That field att1 declared as string that way means that the record contains a pointer to the actual string data (att1 is really a pointer). The compiler manages this pointer and the memory for the associated data, and the string can be any (reasonable) length.
A quick fix for you would be to declare att1 as something like string[64], i.e. a string which can be at most 64 chars long. That would eliminate the pointer and use the memory of the record itself (the att1 field, which is now effectively a static array) as the buffer for the string characters. Declaring a maximum length for the string can of course be slightly dangerous: if you try to assign it a string that is too long, it will be truncated.
To be really complete: it depends on the compiler; some have a switch to make your declaration "string" usable, making it an alias for "string[255]". This is not the default though. Consider also that using string[...] is faster but wastes memory.
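For example, a fixed-layout version of the record from the question could look like this (the 64-character limit is an arbitrary choice; pick whatever maximum fits your data):
TMyType = record
  ID : Integer;
  att1 : string[64]; // ShortString: a length byte plus up to 64 chars, stored inline
  att2 : string[64];
  Value : Integer;
end;
With such a declaration each record occupies a fixed SizeOf(TMyType) bytes including the string data, so it can be written and read back with a single blockwrite/blockread.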
You have a few mistakes.
MyArray is a dynamic array, a reference type (a pointer), so SizeOf(MyArray) is the size of a pointer, not the size of the array. To get the length of the array, use Length(MyArray).
But the bigger problem is saving long strings (AnsiStrings -- the usual type to which string maps --, WideStrings, UnicodeStrings). These are reference types too, so you can't just save them together with the record. You will have to save the parts of the record one by one, and for strings, you will have to use a function like:
procedure SaveStr(var F: File; const S: AnsiString);
var
Actual: Integer;
Len: Integer;
begin
Len := Length(S);
BlockWrite(F, Len, SizeOf(Len), Actual);
if Len > 0 then
begin
BlockWrite(F, S[1], Len * SizeOf(AnsiChar), Actual);
end;
end;
Of course you should normally check Actual and do appropriate error handling, but I left that out, for simplicity.
Reading back is similar: first read the length, then use SetLength to set the string to that size and then read the rest.
So now you do something like:
Len := Length(MyArray);
BlockWrite(F, Len, SizeOf(Len), Actual);
for I := Low(MyArray) to High(MyArray) do
begin
BlockWrite(F, MyArray[I].ID, SizeOf(Integer), Actual);
SaveStr(F, MyArray[I].att1);
SaveStr(F, MyArray[I].att2);
BlockWrite(F, MyArray[I].Value, SizeOf(Integer), Actual);
end;
// etc...
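Reading it back would then mirror the write side, with a matching LoadStr helper (a sketch in the same style as SaveStr; error handling again left out for simplicity):
procedure LoadStr(var F: File; out S: AnsiString);
var
  Actual: Integer;
  Len: Integer;
begin
  BlockRead(F, Len, SizeOf(Len), Actual);
  SetLength(S, Len);
  if Len > 0 then
  begin
    BlockRead(F, S[1], Len * SizeOf(AnsiChar), Actual);
  end;
end;
And the reading loop:
BlockRead(F, Len, SizeOf(Len), Actual);
SetLength(MyArray, Len);
for I := Low(MyArray) to High(MyArray) do
begin
  BlockRead(F, MyArray[I].ID, SizeOf(Integer), Actual);
  LoadStr(F, MyArray[I].att1);
  LoadStr(F, MyArray[I].att2);
  BlockRead(F, MyArray[I].Value, SizeOf(Integer), Actual);
end;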
Note that I can't currently test the code, so it may have some little errors. I'll try this later on, when I have access to a compiler, if that is necessary.
Update
As Marco van de Voort commented, you may have to do:
rewrite(f, 1);
instead of a simple
rewrite(f);
But as I replied to him, if you can, use streams. They are easier to use (IMO) and provide a more consistent interface, no matter to what exactly you try to write or read. There are streams for many different kinds of I/O, and all derive from (and are thus compatible with) the same basic abstract TStream class.
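For illustration, the same string helpers written against TStream could be sketched like this (WriteBuffer and ReadBuffer raise an exception on failure, so no Actual bookkeeping is needed; the surrounding TFileStream would be created with fmCreate for writing and fmOpenRead for reading):
procedure SaveStrToStream(Stream: TStream; const S: AnsiString);
var
  Len: Integer;
begin
  Len := Length(S);
  Stream.WriteBuffer(Len, SizeOf(Len));
  if Len > 0 then
    Stream.WriteBuffer(S[1], Len * SizeOf(AnsiChar));
end;

procedure LoadStrFromStream(Stream: TStream; out S: AnsiString);
var
  Len: Integer;
begin
  Stream.ReadBuffer(Len, SizeOf(Len));
  SetLength(S, Len);
  if Len > 0 then
    Stream.ReadBuffer(S[1], Len * SizeOf(AnsiChar));
end;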
My setting:
OS: Windows 7 SP1 (32 bits)
RAM: 4 GB
Processor: Intel Pentium D 3.00 GHz
Delphi XE
My simple test:
I performed a test running the following program:
program TestAssign;
{$APPTYPE CONSOLE}
uses
SysUtils,
Diagnostics;
type
TTestClazz = class
private
FIntProp: Integer;
FStringProp: string;
protected
procedure SetIntProp(const Value: Integer);
procedure SetStringProp(const Value: string);
public
property IntProp: Integer read FIntProp write SetIntProp;
property StringProp: string read FStringProp write SetStringProp;
end;
{ TTestClazz }
procedure TTestClazz.SetIntProp(const Value: Integer);
begin
if FIntProp <> Value then
FIntProp := Value;
end;
procedure TTestClazz.SetStringProp(const Value: string);
begin
if FStringProp <> Value then
FStringProp := Value;
end;
var
i, j: Integer;
stopw1, stopw2 : TStopwatch;
TestObj: TTestClazz;
begin
ReportMemoryLeaksOnShutdown := True;
//
try
TestObj := TTestClazz.Create;
//
try
j := 10000;
while j <= 100000 do
begin
///
/// direct field assignment
///
stopw1 := TStopwatch.StartNew;
for i := 0 to j do
begin
TestObj.FIntProp := 666;
TestObj.FStringProp := 'Hello';
end;
stopw1.Stop;
///
/// property assignment using setter
///
stopw2 := TStopwatch.StartNew;
for i := 0 to j do
begin
TestObj.IntProp := 666;
TestObj.StringProp := 'Hello';
end;
stopw2.Stop;
///
/// Log results
///
Writeln(Format('Elapsed time for %6.d loops: %5.d %5.d', [j, stopw1.ElapsedMilliseconds, stopw2.ElapsedMilliseconds]));
//
Inc(j, 5000);
end;
//
Writeln('');
Write('Press Return to Quit...');
Readln;
finally
TestObj.Free
end
except
on E: Exception do
Writeln(E.ClassName, ': ', E.Message);
end;
end.
My (provisional) conclusion:
It seems that:
It's worth using a setter with a property under some conditions.
The overhead of calling a method and performing a conditional test takes less time than an assignment.
My question:
Are those findings valid under other, different settings, or are they specific to my setup (an exception)?
I would make the following observations:
The decision as to whether or not to use a setter should be based on factors like code maintenance, correctness, readability rather than performance.
Your benchmark is wholly unreasonable since the if statements evaluate to False every time. Real world code that sets properties would be likely to modify the properties a reasonable proportion of the time that the setter runs.
I would expect that for many real world examples, the setter would run faster without the equality test. If that test were to evaluate to True every time then clearly the code would be quicker without it.
The integer setter is practically free and in fact the setter is slower than the direct field access.
The time is spent in the string property. Here there is some real performance benefit due to the optimisation of the if test which avoids string assignment code if possible.
The setters would be faster if you inlined them, but not by a significant amount.
My belief is that any real world code would never be able to detect any of these performance differences. In reality the bottleneck will be obtaining the values passed to the setters rather than time spent in the setters.
The main situation where such if protection is valuable is where the property modification is expensive. For example, perhaps it involves sending a Windows message, or hitting a database. For a property backed by a field you can probably take it or leave it.
In the chatter in the comments Premature Optimization wonders why the comparison if FStringProp <> Value is quicker than the assignment FStringProp := Value. I investigated a little further and it wasn't quite as I had originally thought.
It turns out that if FStringProp <> Value is dominated by a call to System._UStrEqual. The two strings passed are not in fact the same reference and so each character has to be compared. However, this code is highly optimised and crucially there are only 5 characters to compare.
The call to FStringProp := Value goes to System._UStrAsg and since Value is a literal with negative reference count, a brand new string has to be made. The Pascal version of the code looks like this:
procedure _UStrAsg(var Dest: UnicodeString; const Source: UnicodeString); // globals (need copy)
var
S, D: Pointer;
P: PStrRec;
Len: LongInt;
begin
S := Pointer(Source);
if S <> nil then
begin
if __StringRefCnt(Source) < 0 then // make copy of string literal
begin
Len := __StringLength(Source);
S := _NewUnicodeString(Len);
Move(Pointer(Source)^, S^, Len * SizeOf(WideChar));
end else
begin
P := PStrRec(PByte(S) - SizeOf(StrRec));
InterlockedIncrement(P.refCnt);
end;
end;
D := Pointer(Dest);
Pointer(Dest) := S;
_UStrClr(D);
end;
The key part of this is the call to _NewUnicodeString which of course calls GetMem. I am not at all surprised that heap allocation is significantly slower than comparison of 5 characters.
Put the 'Hello' constant into a variable and use that variable for the assignment, then run the test again.
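In other words, the benchmark above could be adjusted along these lines (a sketch; HelloValue is a new local string variable, so its reference count is positive and _UStrAsg only bumps the refcount instead of allocating a new string each time):
// added to the var section of the benchmark program
HelloValue: string;
// and the first timed loop becomes:
HelloValue := 'Hello';               // copies the literal once into a heap-allocated string
stopw1 := TStopwatch.StartNew;
for i := 0 to j do
begin
  TestObj.FIntProp := 666;
  TestObj.FStringProp := HelloValue; // _UStrAsg now only increments the refcount
end;
stopw1.Stop;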