Before anyone says it, I know this isn't the way it should be done, but it's the way it was done and I'm trying to support it without rewriting it all.
I can assure you this isn't the worst bit by far.
The problem occurs when the application reads an entire file into a string variable.
Normally this works OK because the files are small, but one user created a file of 107 MB and that falls over.
intFreeFile = FreeFile
Open strFilename For Binary Access Read As intFreeFile
ReadFile = String(LOF(intFreeFile), " ")
Get intFreeFile, , ReadFile
Close intFreeFile
Now, it doesn't fall over at the line
ReadFile = String(LOF(intFreeFile), " ")
but on the
Get intFreeFile, , ReadFile
So what's going on here? Surely the String call has already done the memory allocation, so why would it complain about running out of memory on the Get?
Usually reading a file involves some buffering, which takes space. I'm guessing here, but I'd look at the space needed for byte-to-character conversion. VB6 strings are 16-bit, but (binary) files are 8-bit. You'll need 107 MB for the file content, plus 214 MB for the converted result. The string allocation only reserves the 214 MB.
You do not need that Get call; remove it. Since you are reading the file into a string anyway, you can do it in one step with Input:
ReadFile = Input(LOF(intFreeFile), intFreeFile)
I got the same error. We checked Task Manager, which showed 100% resource usage, and found that an updater application was taking too much RAM, so we killed it.
That solved the issue for me. One more thing we did was go into the system configuration settings:
START -> RUN -> MSCONFIG
then go to the Startup tab and uncheck anything that looks like an updater or some odd application that you don't use.
Assume I have multiple processes writing large files (20 GB+). Each process writes its own file; assume that a process writes x MB at a time, then does some processing, then writes another x MB, and so on.
What happens is that this write pattern causes the files to be heavily fragmented, since blocks are allocated in write order and blocks belonging to different files end up interleaved on the disk.
Of course it is easy to work around this issue by using SetEndOfFile to "preallocate" the file when it is opened and then setting the correct size before it is closed. But an application that accesses these files remotely, and is able to parse them while they are in progress, then sees zeroes at the end of the file and takes much longer to parse it.
I do not have control over this reading application, so I can't optimize it to take the zeroes at the end into account.
Another dirty fix would be to run defragmentation more often, run Sysinternals' Contig utility, or even implement a custom "defragmenter" that processes my files and consolidates their blocks.
Another more drastic solution would be to implement a minifilter driver which would report a "fake" filesize.
But obviously both solutions listed above are far from optimal. So I would like to know: is there a way to provide a file size hint to the filesystem so that it "reserves" consecutive space on the drive, but still reports the right file size to applications?
Writing larger chunks at a time obviously helps with fragmentation too, but it still does not solve the issue.
EDIT:
Since the usefulness of SetEndOfFile in my case seems to be disputed, I made a small test:
#include <windows.h>
#include <cstdio>
#include <iostream>

int main()
{
    LARGE_INTEGER size;
    LARGE_INTEGER a;
    char buf = 'A';
    DWORD written = 0;
    DWORD tstart;

    std::cout << "creating file\n";
    tstart = GetTickCount();
    HANDLE f = CreateFileA("e:\\test.dat", GENERIC_ALL, FILE_SHARE_READ, NULL, CREATE_ALWAYS, 0, NULL);
    size.QuadPart = 100000000LL;
    SetFilePointerEx(f, size, &a, FILE_BEGIN);   // move the file pointer to the 100 MB mark
    SetEndOfFile(f);                             // extend EndOfFile without writing any data
    printf("file extended, elapsed: %d\n", GetTickCount() - tstart);
    getchar();

    printf("writing 'A' at the end\n");
    tstart = GetTickCount();
    SetFilePointer(f, -1, NULL, FILE_END);       // one byte before the new end of file
    WriteFile(f, &buf, 1, &written, NULL);       // this write forces NTFS to zero-fill up to here
    printf("written: %d bytes, elapsed: %d\n", written, GetTickCount() - tstart);

    CloseHandle(f);
    return 0;
}
While the application was running and waiting for a keypress after SetEndOfFile, I examined the on-disk NTFS structures:
The image shows that NTFS has indeed allocated clusters for my file. However the unnamed DATA attribute has StreamDataSize specified as 0.
Sysinternals DiskView also confirms that clusters were allocated.
After pressing Enter to allow the test to continue (and waiting quite some time, since the file was created on a slow USB stick), the StreamDataSize field was updated.
Since I wrote 1 byte at the end, NTFS now really had to zero everything, so SetEndOfFile does indeed help with the issue that I am "fretting" about.
I would appreciate it very much if answers/comments also provided an official reference to back up the claims being made.
Oh and the test application outputs this in my case:
creating file
file extended, elapsed: 0
writing 'A' at the end
written: 1 bytes, elapsed: 21735
Also, for the sake of completeness, here is an example of what the DATA attribute looks like when setting FileAllocationInfo (note that I created a new file for this picture).
Windows file systems maintain two public sizes for file data, which are reported in the FileStandardInformation:
AllocationSize - a file's allocation size in bytes, which is typically a multiple of the sector or cluster size.
EndOfFile - a file's absolute end of file position as a byte offset from the start of the file, which must be less than or equal to the allocation size.
Setting an end of file that exceeds the current allocation size implicitly extends the allocation. Setting an allocation size that's less than the current end of file implicitly truncates the end of file.
Starting with Windows Vista, we can manually extend the allocation size without modifying the end of file via SetFileInformationByHandle: FileAllocationInfo. You can use Sysinternals DiskView to verify that this allocates clusters for the file. When the file is closed, the allocation gets truncated to the current end of file.
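A minimal sketch of that call (hypothetical helper name, error handling trimmed), assuming f is a handle opened with write access:
#include <windows.h>
#include <cstdio>

// Extend only the allocation (reserved clusters) of an open file.
// EndOfFile is left alone, so other readers still see the real size.
bool PreallocateClusters(HANDLE f, LONGLONG bytes)
{
    FILE_ALLOCATION_INFO info = {};
    info.AllocationSize.QuadPart = bytes;
    if (!SetFileInformationByHandle(f, FileAllocationInfo, &info, sizeof(info)))
    {
        printf("SetFileInformationByHandle failed: %lu\n", GetLastError());
        return false;
    }
    return true;
}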
If you don't mind using the NT API directly, you can also call NtSetInformationFile: FileAllocationInformation. Or even set the allocation size at creation via NtCreateFile.
FYI, there's also an internal ValidDataLength size, which must be less than or equal to the end of file. As a file grows, the clusters on disk are lazily initialized. Reading beyond the valid region returns zeros. Writing beyond the valid region extends it by initializing all clusters up to the write offset with zeros. This is typically where we might observe a performance cost when extending a file with random writes. We can set the FileValidDataLengthInformation to get around this (e.g. SetFileValidData), but it exposes uninitialized disk data and thus requires SeManageVolumePrivilege. An application that utilizes this feature should take care to open the file exclusively and ensure the file is secure in case the application or system crashes.
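And a rough sketch of the SetFileValidData route (hypothetical helper names; it assumes the account actually holds SeManageVolumePrivilege and that you accept the security caveat above):
#include <windows.h>

// Enable a privilege on the current process token (needed before SetFileValidData).
bool EnablePrivilege(const wchar_t* name)
{
    HANDLE token;
    if (!OpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES, &token))
        return false;
    TOKEN_PRIVILEGES tp = {};
    tp.PrivilegeCount = 1;
    tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
    if (!LookupPrivilegeValueW(NULL, name, &tp.Privileges[0].Luid))
    {
        CloseHandle(token);
        return false;
    }
    BOOL ok = AdjustTokenPrivileges(token, FALSE, &tp, 0, NULL, NULL) &&
              GetLastError() == ERROR_SUCCESS;
    CloseHandle(token);
    return ok != FALSE;
}

// Move ValidDataLength forward so later writes skip the zero-filling,
// at the cost of exposing whatever stale data is in those clusters.
bool ExtendValidData(HANDLE f, LONGLONG validLength)
{
    if (!EnablePrivilege(L"SeManageVolumePrivilege"))
        return false;
    return SetFileValidData(f, validLength) != FALSE;
}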
We're struggling to understand the source of the following bug:
We have a call to ReadFile (synchronous) that returns a non-zero value (success) but sets the lpNumberOfBytesRead parameter to 0. In theory, that indicates that the offset is outside the file, but in practice that is not true. GetLastError returns ERROR_SUCCESS (0).
The files in question are all on a shared network drive (Windows Server 2016 + DFS, Windows 8-10 clients, SMBv3). The files are used in shared mode. In-file locking (LockFileEx) is used to handle concurrent access: we simply lock the first byte of the file before any read/write.
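For context, the pattern around each read looks roughly like this (a simplified sketch with made-up names, not our actual code):
#include <windows.h>

// Lock byte 0 of the file, read one record at the given offset, unlock.
// The puzzling case is ReadFile returning TRUE with bytesRead == 0 even
// though offset + length is within EndOfFile, and GetLastError() == 0.
bool ReadRecordAt(HANDLE h, LONGLONG offset, void* buffer, DWORD length)
{
    OVERLAPPED lockRange = {};   // lock range starts at offset 0
    if (!LockFileEx(h, LOCKFILE_EXCLUSIVE_LOCK, 0, 1, 0, &lockRange))
        return false;

    LARGE_INTEGER pos;
    pos.QuadPart = offset;
    SetFilePointerEx(h, pos, NULL, FILE_BEGIN);

    DWORD bytesRead = 0;
    BOOL ok = ReadFile(h, buffer, length, &bytesRead, NULL);

    UnlockFileEx(h, 0, 1, 0, &lockRange);
    return ok && bytesRead == length;
}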
The handle used is not fresh: it isn't created locally in the functions but retrieved from an application-wide "file handle cache manager". This means that it could have been created (and left unused) some time ago. However, everything we checked indicates the handle is valid at the moment of the call: GetLastError returns 0, and GetFileInformationByHandle returns TRUE and a valid structure.
The error is logged to a file that is located on the same file server as the problematic files.
We have done a lot of logging and testing around this issue. Here are the additional facts we gathered:
Most (but not all) of the problematic reads happen at the very tail of the file: we're reading the last record. However, the read is still within the file, and GetLastError does not return ERROR_HANDLE_EOF. If the program is restarted, the same read with the same parameters works.
The issue is not temporary: repeated calls yield the same result even if we let the program loop indefinitely. Restarting the program, however, does not automatically lead to the issue reappearing immediately.
We are sure the offset is inside the file: we check the actual file pointer location after the failure and compare it with the expected value as well as the size of the file as reported by the OS; everything matches across multiple retries.
The issue shows up only randomly: there is no real pattern to when the program works as expected and when it fails. It occurs 2-4 times a day in our office (about 20 people).
The issue does not only occur on our network: we've seen the symptoms and the log entries in multiple locations, although we have no clear view of the OS versions involved in those cases.
We just deployed a new version of the program that attempts to re-open the file in case of failure, but that is a workaround, not a fix: we need to understand what is happening here, and I must admit I have found no rational explanation for it.
Any suggestion about what could cause this error, or what other steps could be taken to find out, will be welcome.
Edit 2
(In the interest of keeping this clear, I removed the code: the new evidence below gives a better picture of the issue.)
We managed to get a procmon trace while the problem was happening and we got the following sequence of events that we simply cannot explain:
Text version:
"Time of Day","Process Name","PID","Operation","Path","Result","Detail","Command Line"
"9:43:24.8243833 AM","wacprep.exe","33664","ReadFile","\\office.git.ch\dfs\Data\EURDATA\GIT18\JNLS.DTA","END OF FILE","Offset: 7'091'712, Length: 384, Priority: Normal","O:\WinEUR\wacprep.exe /company:GIT18"
"9:43:24.8244011 AM","wacprep.exe","33664","QueryStandardInformationFile","\\office.git.ch\dfs\Data\EURDATA\GIT18\JNLS.DTA","SUCCESS","AllocationSize: 7'094'272, EndOfFile: 7'092'864, NumberOfLinks: 1, DeletePending: False, Directory: False","O:\WinEUR\wacprep.exe /company:GIT18"
(there are thousands of these logged since the application is in an infinite loop.)
As we understand this, the ReadFile call should succeed: the offset is well within the bounds of the file. Yet it fails. ProcMon reports END OF FILE, although I suspect that's just because ReadFile returned != 0 and reported 0 bytes read.
While the loop was running, we managed to unblock it by increasing the size of the file from a different machine:
"Time of Day","Process Name","PID","Operation","Path","Result","Detail","Command Line"
"9:46:58.6204637 AM","wacprep.exe","33664","ReadFile","\\office.git.ch\dfs\Data\EURDATA\GIT18\JNLS.DTA","END OF FILE","Offset: 7'091'712, Length: 384, Priority: Normal","O:\WinEUR\wacprep.exe /company:GIT18"
"9:46:58.6204810 AM","wacprep.exe","33664","QueryStandardInformationFile","\\office.git.ch\dfs\Data\EURDATA\GIT18\JNLS.DTA","SUCCESS","AllocationSize: 7'094'272, EndOfFile: 7'092'864, NumberOfLinks: 1, DeletePending: False, Directory: False","O:\WinEUR\wacprep.exe /company:GIT18"
"9:46:58.7270730 AM","wacprep.exe","33664","ReadFile","\\office.git.ch\dfs\Data\EURDATA\GIT18\JNLS.DTA","SUCCESS","Offset: 7'091'712, Length: 384, Priority: Normal","O:\WinEUR\wacprep.exe /company:GIT18"
I'm trying to find the current flag count in KMines by using gdb. I know that I should look at the memory mappings first to avoid non-existent memory locations, so I ran the info proc mappings command to see the memory segments. I picked a random memory range (0xd27000-0x168b000) from the result and executed the find command like this: find 0x00d27000, 0x0168b000, 10
But I got this error: warning: Unable to access 1458 bytes of target memory at 0x168aa4f, halting search. Although the address 0x168aa4f is between 0xd27000 and 0x168b000, gdb says that it can't access it. Why does this happen? What can I do to avoid this situation? Or is there a way to ignore unmapped/inaccessible memory locations?
Edit: I tried to set the value at address 0x168aa4f to 1 and it worked, so gdb can actually access that address, but it gives the error when used with the find command. Why?
I guess I have solved my own problem; I can't believe how simple the solution was. The only thing I did was decrease the second parameter's value by one, so the command should be find 0x00d27000, 0x0168afff, 10. Linux describes memory mappings as half-open [x,y) ranges, so if a line in /proc/<pid>/maps says something like this:
01a03000-0222a000 rw-p
The memory allocated includes 0x01a03000 but not 0x0222a000. Hope this silly mistake of mine helps someone :D
Edit: The root of the problem is the algorithm implemented in target.c (in gdb's source code): it reads and searches memory in chunks of 16000 bytes. So even if only the last byte of a chunk is inaccessible, gdb throws the entire chunk away and doesn't give any precise information about the invalid byte; it only reports the beginning of the current chunk.
I'm working on a VBScript web application that has a newly-introduced requirement to talk to the registry to pull connection string information instead of using hard-coded strings. I'm doing performance profiling because of the overhead this will introduce, and noticed in Process Monitor that reading the value returns two BUFFER OVERFLOW results before finally returning a success.
Looking online, I found that Mark Russinovich posted about this topic a few years back, indicating that since the size of the registry entry isn't known in advance, a default buffer of 144 bytes is used. Since there are two buffer overflow responses, the time taken by the entire call is approximately doubled (and yes, I realize the difference is 40 microseconds, but at 1,000 or more page hits per second, I'm willing to invest some time in optimization).
My question is this: is there a way to tell WMI what the size of the registry value is before it tries to get it? Here's a sample of the code I'm using to access the registry:
svComputer = "." ' Local machine is simply "."
ivHKey = &H80000002 ' HKEY_LOCAL_MACHINE = &H80000002 (from WinReg.h)
svRegPath = "SOFTWARE\Path\To\My\Values"
Set oRegistry = GetObject("winmgmts:{impersonationLevel=impersonate}!\\" & svComputer & "\root\default:StdRegProv")
oRegistry.GetStringValue ivHKey, svRegPath, "Value", svValue
In VBScript, strings are just strings: they are however long they need to be, and you don't pre-define their length. Also, if performance is that much of an issue for you, you should consider using a compiled language instead of an interpreted one (or caching values you have already read).
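For comparison, if you did move this hot path into a small compiled helper, the native registry API lets you ask for the required size first and then read with an exactly-sized buffer, so there is no default-buffer/overflow/retry round trip. A sketch (the key path and value name are the placeholders from the question):
#include <windows.h>
#include <string>
#include <cwchar>

std::wstring ReadRegString(HKEY root, const wchar_t* subkey, const wchar_t* name)
{
    DWORD size = 0;
    // First call: no buffer, just ask how many bytes the value needs.
    if (RegGetValueW(root, subkey, name, RRF_RT_REG_SZ, NULL, NULL, &size) != ERROR_SUCCESS)
        return L"";

    std::wstring value(size / sizeof(wchar_t), L'\0');
    // Second call: exactly-sized buffer, so no overflow/retry.
    if (RegGetValueW(root, subkey, name, RRF_RT_REG_SZ, NULL, &value[0], &size) != ERROR_SUCCESS)
        return L"";

    value.resize(wcslen(value.c_str()));  // drop the trailing NUL
    return value;
}

// Usage:
// std::wstring s = ReadRegString(HKEY_LOCAL_MACHINE, L"SOFTWARE\\Path\\To\\My\\Values", L"Value");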
My solution is needed only on 64-bit Windows 7/8.
Some program (32-bit, I have no sources) loads some DLLs. One of these DLLs is mine. I would like to search the whole process memory, including all loaded DLLs, for occurrences of some string (I would like to change one byte of this string in every occurrence).
I know there is the WinAPI function ReadProcessMemory, but since my DLL is in the same address space, maybe I can read the memory directly.
Opening the process's RAM in HxD shows that addresses from 0x10000 to 0x21000 are readable, then 0x41000 is not readable, etc. I've tested it, and I get a Reading Memory error when reading 0x4100 from the DLL.
Is it possible to read all of the process's data without using ReadProcessMemory? How can I know which addresses are readable?
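What I have in mind, since my DLL runs inside the target process, is something like walking the address space with VirtualQuery and only scanning committed, readable pages. A rough sketch of the idea (the actual byte-patching logic is omitted):
#include <windows.h>
#include <cstring>

// True if the region described by mbi can be read directly.
static bool IsReadable(const MEMORY_BASIC_INFORMATION& mbi)
{
    if (mbi.State != MEM_COMMIT)
        return false;
    if (mbi.Protect & (PAGE_GUARD | PAGE_NOACCESS))
        return false;
    return (mbi.Protect & (PAGE_READONLY | PAGE_READWRITE |
                           PAGE_EXECUTE_READ | PAGE_EXECUTE_READWRITE |
                           PAGE_WRITECOPY | PAGE_EXECUTE_WRITECOPY)) != 0;
}

// Walk our own address space and look for the string in readable regions.
void ScanOwnProcess(const char* needle, size_t needleLen)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    const BYTE* addr = (const BYTE*)si.lpMinimumApplicationAddress;
    const BYTE* end  = (const BYTE*)si.lpMaximumApplicationAddress;

    MEMORY_BASIC_INFORMATION mbi;
    while (addr < end && VirtualQuery(addr, &mbi, sizeof(mbi)) != 0)
    {
        if (IsReadable(mbi))
        {
            const BYTE* base = (const BYTE*)mbi.BaseAddress;
            for (SIZE_T i = 0; i + needleLen <= mbi.RegionSize; ++i)
                if (memcmp(base + i, needle, needleLen) == 0)
                {
                    // Found a match: the one-byte change would go here
                    // (may need VirtualProtect if the page isn't writable).
                }
        }
        addr = (const BYTE*)mbi.BaseAddress + mbi.RegionSize;
    }
}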