If FindNextUrlCacheEntry() fails, how can I retrieve info of the failed entry again?

I got an ERROR_INSUFFICIENT_BUFFER error when invoking FindNextUrlCacheEntry(). I then wanted to retrieve the failed entry again using an enlarged buffer, but I found that when I invoke FindNextUrlCacheEntry() again, it retrieves the entry after the failed one. Is there any way to go back and retrieve the information of the entry that just failed?

I also observed the same behavior on XP. I am trying to clear the IE cache programmatically using the WinInet APIs. The code at the following MSDN link works perfectly on Win7/Vista but deletes cache files in batches (multiple runs) on XP. On debugging I found that the FindNextUrlCacheEntry API gives different sizes for the same entry when executed multiple times.
MSDN Link: http://support.microsoft.com/kb/815718
Here is what I am doing:
First of all, I make a call to determine the size of the next URL entry:
fSuccess = FindNextUrlCacheEntry(hCacheHandle, NULL, &cacheEntryInfoBufferSizeInitial); // cacheEntryInfoBufferSizeInitial = 0 at this point
The above call returns FALSE with the error code ERROR_INSUFFICIENT_BUFFER and with the cacheEntryInfoBufferSizeInitial parameter set to the size, in bytes, of the buffer required to retrieve the cache entry. After allocating the required size (cacheEntryInfoBufferSizeInitial), I call the same WinInet API again, expecting it to retrieve the entry successfully this time. But sometimes it fails again: even with the buffer size the API itself reported, it now expects more bytes than it reported earlier. Most of the time the difference is a few bytes, but I have also seen cases where the difference is almost 4 to 5 times.
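For reference, here is a minimal sketch of the grow-and-retry step in question, assuming (as the documentation implies, and contrary to what I sometimes observe on XP) that a failed call does not advance the enumeration:

#include <windows.h>
#include <wininet.h>
#include <vector>
#pragma comment(lib, "wininet.lib")

// Retrieves the next cache entry into buf, growing buf as the API requests.
bool NextCacheEntry(HANDLE hEnum, std::vector<BYTE>& buf)
{
    for (;;)
    {
        DWORD cb = static_cast<DWORD>(buf.size());
        auto* info = reinterpret_cast<INTERNET_CACHE_ENTRY_INFO*>(buf.data());
        if (FindNextUrlCacheEntry(hEnum, info, &cb))
            return true;                 // entry retrieved into buf
        if (GetLastError() != ERROR_INSUFFICIENT_BUFFER)
            return false;                // ERROR_NO_MORE_ITEMS or a real failure
        buf.resize(cb);                  // grow to the size the API reported, then retry
    }
}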

For what it's worth, this seems to be solved in Vista.

Related

Windows pintool mismatch between call/ret instructions

So I've been trying to write a pintool that monitors call/ret instructions, but I've noticed that there was a significant inconsistency between the two, for example ret instructions without a preceding call.
I've run the tool on a console application, from which I got the following logs showing this inconsistency (this is an example; there are more inconsistencies like the one listed below in the other call/ret instructions):
1. Call from ntdll!LdrpCallInitRoutine+0x69, expected to return to 7ff88f00502a
2. RETURN to 7ff88f00502a
// the call from ntdll!LdrpInitializeNode+0x1ac, which is supposed to return to 7ff88f049385, is missing (the previous instruction)
3. RETURN to 7ff88f049385 (ntdll!LdrpInitializeNode+0x1b1)
The above are the first 3 log entries for the call/ret instructions. As one can see, the monitoring started a bit late, at the call found at ntdll!LdrpCallInitRoutine+0x69; it returned to the expected address, but then returned to 7ff88f049385 without first tracking the call found in the previous instruction.
Any ideas of what could be the fault?
The program is traced with INS_AddInstrumentFunction with a callback that more or less does:
if (INS_IsCall(ins)) INS_InsertCall(ins, ...);
if (INS_IsRet(ins)) INS_InsertCall(ins, ...);
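Expanded into a full (hypothetical) tool skeleton, the instrumentation looks roughly like this; OnCall/OnRet are illustrative analysis routines, not the actual ones:

#include "pin.H"

VOID OnCall(ADDRINT ip, ADDRINT expectedRet) { /* push expectedRet on a shadow stack */ }
VOID OnRet(ADDRINT ip, ADDRINT target)       { /* pop and compare with target */ }

VOID Instruction(INS ins, VOID*)
{
    if (INS_IsCall(ins))
        INS_InsertCall(ins, IPOINT_BEFORE, (AFUNPTR)OnCall,
                       IARG_INST_PTR,
                       IARG_ADDRINT, INS_NextAddress(ins),   // expected return address
                       IARG_END);
    if (INS_IsRet(ins))
        INS_InsertCall(ins, IPOINT_BEFORE, (AFUNPTR)OnRet,
                       IARG_INST_PTR,
                       IARG_BRANCH_TARGET_ADDR,              // actual return target
                       IARG_END);
}

int main(int argc, char* argv[])
{
    if (PIN_Init(argc, argv)) return 1;
    INS_AddInstrumentFunction(Instruction, 0);
    PIN_StartProgram();                                      // never returns
    return 0;
}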
I've tried the same program on Linux, where it worked as expected, without any mismatch.
Any ideas of the reason behind this behavior?

writing partial data with libwebsockets

I'm using libwebsockets v2.4.
The doc seems unclear to me about what I have to do with the return value of the lws_write() function.
If it returns -1, it's an error and I'm invited to close the connection. That's fine for me.
But when it returns a value that is strictly less than the buffer length I passed, should I consider that I have to write the remaining bytes later (in another WRITABLE callback occurrence)? Is it even possible to have this situation?
Also, should I call lws_send_pipe_choked() before using lws_write(), considering that I always use lws_write() in the context of a WRITABLE callback?
My understanding is that lws_write() always returns the requested buffer length unless an error occurs.
If you look at lws_issue_raw() (whose result is returned by lws_write()) in output.c (https://github.com/warmcat/libwebsockets/blob/v2.4.0/lib/output.c#L157), you can see that if the length written by lws_ssl_capable_write() is less than the provided length, then lws allocates a buffer on wsi->trunc_alloc to hold the remaining bytes so they can be sent in the future.
Concerning your second question, I think it is safe to call lws_write() in the context of a WRITABLE callback without checking whether the pipe is choked. However, if you happen to loop on lws_write() in the callback, lws_send_pipe_choked() must be called to protect the subsequent calls to lws_write(). If you don't, you might trip this assertion https://github.com/warmcat/libwebsockets/blob/v2.4.0/lib/output.c#L83 and the user code will crash.
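As a minimal sketch of that looping pattern (have_more_data() and next_chunk() are hypothetical stand-ins for your own send queue):

#include <libwebsockets.h>

bool have_more_data();
size_t next_chunk(unsigned char *dst, size_t max);   // fills dst, returns length

static int on_writable(struct lws *wsi)
{
    unsigned char buf[LWS_PRE + 512];                // lws needs LWS_PRE headroom

    // Drain the queue, but stop as soon as the pipe chokes.
    while (have_more_data() && !lws_send_pipe_choked(wsi)) {
        size_t n = next_chunk(&buf[LWS_PRE], sizeof(buf) - LWS_PRE);
        if (lws_write(wsi, &buf[LWS_PRE], n, LWS_WRITE_TEXT) < 0)
            return -1;                               // error: close the connection
    }
    if (have_more_data())
        lws_callback_on_writable(wsi);               // come back when the socket drains
    return 0;
}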

ReadFile !=0, lpNumberOfBytesRead=0 but offset is not at the end of the file

We're struggling to understand the source of the following bug:
We have a call to ReadFile (synchronous) that returns a non-zero value (success) but sets the lpNumberOfBytesRead parameter to 0. In theory, that indicates that the offset is outside the file but, in practice, that is not true. GetLastError returns ERROR_SUCCESS (0).
The files in question are all on a shared network drive (Windows Server 2016 + DFS, Windows 8-10 clients, SMBv3). The files are used in shared mode. In-file locking (LockFileEx) is used to handle concurrent file access: we're just locking the first byte of the file before any read/write, as sketched below.
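The byte-range lock looks roughly like this (a sketch, not the actual code):

#include <windows.h>

// Lock byte 0 of the file, do the I/O, then release the lock.
bool WithFirstByteLock(HANDLE h)
{
    OVERLAPPED ov = {};                      // Offset/OffsetHigh = 0 -> byte 0
    if (!LockFileEx(h, LOCKFILE_EXCLUSIVE_LOCK, 0, 1, 0, &ov))
        return false;
    // ... perform the read/write here ...
    UnlockFileEx(h, 0, 1, 0, &ov);
    return true;
}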
The handle used is not fresh: it isn't created locally in the functions but retrieved from an application-wide "file handle cache manager". This means that it could have been created (unused) some time ago. However, everything we did indicates the handle is valid at the moment of the call: GetLastError returns 0, and GetFileInformationByHandle returns TRUE and a valid structure.
The error is logged to a file that is located on the same file server as the problematic files.
We have done a lot of logging and testing around this issue. Here are the additional facts we gathered:
Most (but not all) of the problematic reads happen at the very tail of the file: we're reading the last record. However, the read is still within the file, and GetLastError does not return ERROR_HANDLE_EOF. If the program is restarted, the same read with the same parameters works.
The issue is not temporary: repeated calls yield the same result even if we let the program loop indefinitely. Restarting the program, however, does not automatically lead to the issue being raised again immediately.
We are sure the offset is inside the file: we check the actual file pointer location after the failure and compare it with the expected value as well as the size of the file as reported by the OS; everything matches across multiple retries.
The issue only shows up randomly: there is no real pattern to the program working as expected and the program failing. It occurs 2-4 times a day in our office (about 20 people).
The issue does not only occur on our network: we've seen the symptoms and the log entries in multiple locations, although we have no clear view of the OS involved in these cases.
We just deployed a new version of the program that will attempt to re-open the file in case of failure, but that is a workaround, not a fix: we need to understand what is happening here, and I must admit that I have found no rational explanation for it.
Any suggestion about what could be the cause of this error, or what other steps we could take to find out, will be welcome.
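For clarity, here is a sketch of the failing call together with the sanity checks described above (names are illustrative):

#include <windows.h>
#include <cstdio>

bool ReadRecord(HANDLE h, void* out, DWORD len)
{
    DWORD bytesRead = 0;
    if (!ReadFile(h, out, len, &bytesRead, nullptr))
        return false;                            // hard failure, GetLastError() set

    if (bytesRead == 0) {
        // Success with 0 bytes normally means "offset at or after EOF" -- verify.
        const DWORD err = GetLastError();        // observed: ERROR_SUCCESS (0)
        LARGE_INTEGER size = {}, pos = {}, zero = {};
        GetFileSizeEx(h, &size);
        SetFilePointerEx(h, zero, &pos, FILE_CURRENT);   // current offset
        printf("0-byte read at offset %lld, file size %lld, last error %lu\n",
               pos.QuadPart, size.QuadPart, err);
        return false;
    }
    return true;
}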
Edit 2
(In the interest of keeping this clear, I removed the code: the new evidence gives a better explanation of the issue.)
We managed to get a procmon trace while the problem was happening and we got the following sequence of events that we simply cannot explain:
Text version:
"Time of Day","Process Name","PID","Operation","Path","Result","Detail","Command Line"
"9:43:24.8243833 AM","wacprep.exe","33664","ReadFile","\\office.git.ch\dfs\Data\EURDATA\GIT18\JNLS.DTA","END OF FILE","Offset: 7'091'712, Length: 384, Priority: Normal","O:\WinEUR\wacprep.exe /company:GIT18"
"9:43:24.8244011 AM","wacprep.exe","33664","QueryStandardInformationFile","\\office.git.ch\dfs\Data\EURDATA\GIT18\JNLS.DTA","SUCCESS","AllocationSize: 7'094'272, EndOfFile: 7'092'864, NumberOfLinks: 1, DeletePending: False, Directory: False","O:\WinEUR\wacprep.exe /company:GIT18"
(there are thousands of these logged since the application is in an infinite loop.)
As we understand this, the ReadFile call should succeed: the offset is well within the boundaries of the file. Yet, it fails. ProcMon reports END OF FILE, although I suspect that's just because ReadFile returned non-zero and reported 0 bytes read.
While the loop was running, we managed to unblock it by increasing the size of the file from a different machine:
"Time of Day","Process Name","PID","Operation","Path","Result","Detail","Command Line"
"9:46:58.6204637 AM","wacprep.exe","33664","ReadFile","\\office.git.ch\dfs\Data\EURDATA\GIT18\JNLS.DTA","END OF FILE","Offset: 7'091'712, Length: 384, Priority: Normal","O:\WinEUR\wacprep.exe /company:GIT18"
"9:46:58.6204810 AM","wacprep.exe","33664","QueryStandardInformationFile","\\office.git.ch\dfs\Data\EURDATA\GIT18\JNLS.DTA","SUCCESS","AllocationSize: 7'094'272, EndOfFile: 7'092'864, NumberOfLinks: 1, DeletePending: False, Directory: False","O:\WinEUR\wacprep.exe /company:GIT18"
"9:46:58.7270730 AM","wacprep.exe","33664","ReadFile","\\office.git.ch\dfs\Data\EURDATA\GIT18\JNLS.DTA","SUCCESS","Offset: 7'091'712, Length: 384, Priority: Normal","O:\WinEUR\wacprep.exe /company:GIT18"

GetModuleInformation() function returned address and size information

GetModuleHandle() and GetModuleInformation() return address and size information for all the loaded modules in an app. I am only interested in the first module (the exe), but when using ReadProcessMemory() and calling it more than once (using the same handles and process ID) and comparing the passes, I get a few differences each time.
I was expecting the returned memory address to be just a code segment, but this appears not to be the case. Does the module memory address and size returned by GetModuleInformation() include code and data?
I have tried looking around for a full description of the Windows app load process but cannot find anything.
Does the module memory address and size returned by GetModuleInformation() include code and data?
Yes.
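A small sketch illustrating this: the range reported by GetModuleInformation() covers the whole mapped image (headers, code and writable data), which is why repeated ReadProcessMemory() passes over it can legitimately differ:

#include <windows.h>
#include <psapi.h>
#include <cstdio>
#pragma comment(lib, "psapi.lib")

int main()
{
    MODULEINFO mi = {};
    HMODULE exe = GetModuleHandle(nullptr);      // first module: the .exe itself
    if (GetModuleInformation(GetCurrentProcess(), exe, &mi, sizeof(mi))) {
        // SizeOfImage spans the whole mapped image: headers, code AND data
        // (e.g. .data/.bss), so bytes in this range change as the program runs.
        printf("base=%p size=%lu\n", mi.lpBaseOfDll, (unsigned long)mi.SizeOfImage);
    }
    return 0;
}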

How to limit the memory that an application can allocate

I need a way to limit the amount of memory that a service may allocate in order to prevent the service from starving the system, similar to the way SQL Server allows you to set "Maximum server memory".
I know SetProcessWorkingSetSize doesn't do exactly what I want, but I'm trying to get it to behave the way that I believe it should. Regardless of the values that I use, my test app's working set is not limited. Further, if I call GetProcessWorkingSetSize immediately afterwards, the values returned are not what I previously specified. Here's the code used by my test app:
var
  MinWorkingSet: SIZE_T;
  MaxWorkingSet: SIZE_T;
begin
  if not SetProcessWorkingSetSize(GetCurrentProcess(), 20, 12800) then
    RaiseLastOSError();
  if GetProcessWorkingSetSize(GetCurrentProcess(), MinWorkingSet, MaxWorkingSet) then
    ShowMessage(Format('%d'#13#10'%d', [MinWorkingSet, MaxWorkingSet]));
end;
No error occurs, but both the Min and Max values returned by GetProcessWorkingSetSize are 81,920.
I tried using SetProcessWorkingSetSizeEx with QUOTA_LIMITS_HARDWS_MAX_ENABLE ($00000004) in the Flags parameter. Unfortunately, SetProcessWorkingSetSizeEx fails with "Code 87. The parameter is incorrect" if I pass anything other than $00000000 in Flags.
I've also pursued using Job Objects to accomplish the same goal. I have memory limits working with Job Objects when launching a child process. However, I need the ability for a service to set its own memory limits rather than depending on a "launching" service to do it. So far, I haven't found a way for a single process to create a job object and then add itself to the job object. This always fails with Access Denied.
Any thoughts or suggestions?
The documentation of SetProcessWorkingSetSize function says:
dwMinimumWorkingSetSize [in]
...
This parameter must be greater than zero but less than or equal to the maximum working set size. The default size is 50 pages (for example, this is 204,800 bytes on systems with a 4K page size). If the value is greater than zero but less than 20 pages, the minimum value is set to 20 pages.
The values are specified in bytes. In the case of a 4K page size, the imposed minimum value is 20 * 4096 = 81,920 bytes, which is the value you saw.
To actually limit the memory for your service process, I think it's possible to create a new job (CreateJobObject), set the memory limit (SetInformationJobObject), and assign your current process to the job (AssignProcessToJobObject) in the service's startup routine, as sketched below.
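A minimal sketch of that sequence, using a per-process commit limit (the exact limit type you need may differ):

#include <windows.h>

// Create a job, set a memory limit, and put the current process in it.
bool LimitOwnMemory(SIZE_T maxBytes)
{
    HANDLE job = CreateJobObject(nullptr, nullptr);
    if (!job) return false;

    JOBOBJECT_EXTENDED_LIMIT_INFORMATION info = {};
    info.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_PROCESS_MEMORY;
    info.ProcessMemoryLimit = maxBytes;          // commit limit per process, in bytes

    if (!SetInformationJobObject(job, JobObjectExtendedLimitInformation,
                                 &info, sizeof(info)))
        return false;

    // Fails with ERROR_ACCESS_DENIED before Windows 8 if the process is
    // already in a job (see the quoted documentation below).
    return AssignProcessToJobObject(job, GetCurrentProcess()) != 0;
}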
Unfortunately, on Windows before 8 and Server 2012, this won't work if the process already belongs to a job:
Windows 7, Windows Server 2008 R2, Windows XP with SP3, Windows Server 2008, Windows Vista and Windows Server 2003: The process must not already be assigned to a job; if it is, the function fails with ERROR_ACCESS_DENIED. This behavior changed starting in Windows 8 and Windows Server 2012.
If this is your case (i.e., you get ERROR_ACCESS_DENIED on older Windows), check whether the process is already assigned to a job (in which case you're out of luck), but also make sure that it has the required access rights: PROCESS_SET_QUOTA and PROCESS_TERMINATE.
