What's the fastest way to list all the exe files in a huge directory in Delphi?

At the moment I'm doing something like this:
var
Files: TArray<String>;
if IncludeSubDirs then
Files := TDirectory.GetFiles(Path, '*.exe', TSearchOption.soAllDirectories)
else
Files := TDirectory.GetFiles(Path, '*.exe', TSearchOption.soTopDirectoryOnly);
Path is a user-defined String that can point to any existing directory. For "big" directories with a ton of files and IncludeSubDirs = True (C:\Windows\, for example), GetFiles takes a very long time (30+ seconds).
What would be the fastest way to list all the exe files in a "big" directory under Windows with Delphi (if any)?

I did some benchmarking and for a huge directory FindFirst / FindNext is about 1.5 to 3% faster than using TDirectory. I would say that both are equivalent in speed (for my use case I saved about 1 second per minute). I ended up using FindFirst / FindNext since you get results progressively and not all at once, memory management seemed better, and it's easier to cancel midway. I also used a TThread to avoid blocking my UI.
This is what I ended up with:
procedure TDirectoryWorkerThread.AddToTarget(const Item: String);
begin
  if (not Self.Parameters.DistinctResults) or (Self.Target.IndexOf(Item) = -1) then
    Self.Target.Add(Item);
end;

procedure TDirectoryWorkerThread.ListFilesDir(Directory: String);
var
  SearchResult: TSearchRec;
begin
  Directory := IncludeTrailingPathDelimiter(Directory);
  if FindFirst(Directory + '*', faAnyFile, SearchResult) = 0 then
  begin
    try
      repeat
        if (SearchResult.Attr and faDirectory) = 0 then
        begin
          if (Self.Parameters.AllowedExtensions = nil) or
             (Self.Parameters.AllowedExtensions.IndexOf(ExtractFileExt(SearchResult.Name)) <> -1) then
            AddToTarget(Directory + SearchResult.Name);
        end
        else if Self.Parameters.IncludeSubDirs and (SearchResult.Name <> '.') and (SearchResult.Name <> '..') then
          ListFilesDir(Directory + SearchResult.Name);
      until Self.Terminated or (FindNext(SearchResult) <> 0);
    finally
      FindClose(SearchResult);
    end;
  end;
end;
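For completeness, a minimal sketch of how the worker thread might drive ListFilesDir; the Execute override, Parameters.RootDirectory and FOnCompleted are assumptions of mine (only ListFilesDir and AddToTarget above come from the original code):
procedure TDirectoryWorkerThread.Execute;
begin
  // Parameters.RootDirectory and FOnCompleted are hypothetical members,
  // mirroring the Parameters/Target fields used by ListFilesDir above.
  ListFilesDir(Parameters.RootDirectory);
  if not Terminated then
    Synchronize(
      procedure
      begin
        // back on the main thread: safe to touch the UI here
        if Assigned(FOnCompleted) then
          FOnCompleted(Self);
      end);
end;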

Related

Error: "Cannot open file "20210609.log". The process cannot access the file because it is being used by another process"

I've got some third party code that writes to a log file one line at a time using the code below:
procedure TLog.WriteToLog(Entry: ansistring);
var
strFile: string;
fStream: TFileStream;
strDT: ansistring;
begin
if ((strLogDirectory<>'') and (strFileRoot<>'')) then
begin
if not(DirectoryExists(strLogDirectory)) then
ForceDirectories(strLogDirectory);
strFile:=strLogDirectory + '\' + strFileRoot + '-' + strFilename;
if FileExists(strFile) then
fStream:=TFileStream.Create(strFile, fmOpenReadWrite)
else
fStream:=TFileStream.Create(strFile, fmCreate);
fStream.Seek(0, soEnd);
if blnUseTimeStamp then
strDT:=formatdatetime(strDateFmt + ' hh:mm:ss', Now) + ' ' + Entry + chr(13) + chr(10)
else
strDT:=Entry + chr(13) + chr(10);
fStream.WriteBuffer(strDT[1], length(strDT));
FreeandNil(fStream);
end;
end;
This had previously been working fine at a client site, but in the last few weeks it has started producing the error in the title.
There is no other process that should have the file open. I suspect it is anti-virus, but the client claims they have disabled the anti-virus and they still get the error.
The error ONLY seems to occur when the code is in a loop and writing lines quickly.
WHAT I WANT TO KNOW:
Assuming it is not the Anti-Virus (or similar) causing the problem, could it be due to the operating system not clearing a flag (or something similar) before the next time it tries to write to the file?
No, this is not a speed issue, or a caching issue. It is a sharing violation, which means there MUST be another open handle to the same file, where that handle has sharing rights assigned (or lack of) which are incompatible with the rights being requested by this code.
For example, if that other handle is not sharing read+write access, then this code will fail to open the file when creating the TFileStream with fmOpenReadWrite. If any handle is open to the file, this code will fail when creating the TFileStream with fmCreate, as that requests Exclusive access to the file by default.
I would suggest something more like this instead:
procedure TLog.WriteToLog(Entry: AnsiString);
var
strFile: string;
fStream: TFileStream;
strDT: AnsiString;
fMode: Word;
begin
if (strLogDirectory <> '') and (strFileRoot <> '') then
begin
ForceDirectories(strLogDirectory);
strFile := IncludeTrailingPathDelimiter(strLogDirectory) + strFileRoot + '-' + strFilename;
fMode := fmOpenReadWrite or fmShareDenyWrite;
if not FileExists(strFile) then fMode := fMode or fmCreate;
fStream := TFileStream.Create(strFile, fMode);
try
fStream.Seek(0, soEnd);
if blnUseTimeStamp then
strDT := FormatDateTime(strDateFmt + ' hh:mm:ss', Now) + ' ' + Entry + sLineBreak
else
strDT := Entry + sLineBreak;
fStream.WriteBuffer(strDT[1], Length(strDT));
finally
fStream.Free;
end;
end;
end;
However, do note that using FileExists() introduces a TOCTOU race condition. The file might be deleted/created by someone else after the existence is checked and before the file is opened/created. Best to let the OS handle this for you.
At least on Windows, you can use CreateFile() directly with the OPEN_ALWAYS flag (TFileStream only ever uses CREATE_ALWAYS, CREATE_NEW, or OPEN_EXISTING), and then assign the resulting THandle to a THandleStream, eg:
procedure TLog.WriteToLog(Entry: AnsiString);
var
strFile: string;
hFile: THandle;
fStream: THandleStream;
strDT: AnsiString;
begin
if (strLogDirectory <> '') and (strFileRoot <> '') then
begin
ForceDirectories(strLogDirectory);
strFile := IncludeTrailingPathDelimiter(strLogDirectory) + strFileRoot + '-' + strFilename;
hFile := CreateFile(PChar(strFile), GENERIC_WRITE, FILE_SHARE_READ, nil, OPEN_ALWAYS, 0, 0);
if hFile = INVALID_HANDLE_VALUE then RaiseLastOSError;
try
fStream := THandleStream.Create(hFile);
try
fStream.Seek(0, soEnd);
if blnUseTimeStamp then
strDT := FormatDateTime(strDateFmt + ' hh:mm:ss', Now) + ' ' + Entry + sLineBreak
else
strDT := Entry + sLineBreak;
fStream.WriteBuffer(strDT[1], Length(strDT));
finally
fStream.Free;
end;
finally
CloseHandle(hFile);
end;
end;
end;
In any case, you can use a tool like SysInternals Process Explorer to verify if there is another handle open to the file, and which process it belongs to. If the offending handle in question is being closed before you can see it in PE, then use a tool like SysInternals Process Monitor to log access to the file in real-time and check for overlapping attempts to open the file.

Wrong use of 'file of char'

I'm having problems with this code. I have two file of char variables: one is filled with information about books, and the other is empty. I have to write some information from S to SAL, and then show the total of how many books match the first 2 digits of the code, and how many are R and how many are T. The code does write the information from S to SAL, but when it is supposed to show the totals, ERROR 100 appears on screen. I read that this is a 'Disk read error' which *typically occurs if you seek a non-existent record of a typed file and try to read/write it*, and I really don't understand that.
I've been trying to figure it out, but I haven't been able to. I noticed that if I don't put 'WHILE NOT EOF(S) DO' the error does not appear, but of course I need the while. If someone is able to point out my mistakes I would really appreciate it.
This is the code:
uses crt;
var
i : byte;
s,sal: file of char;
v,l1,l2: char;
cs,cn,cl: integer;
pn,ps,tot: integer;
BEGIN
cs:=0; cn:=0; i:=0; cl:=0;
Assign (s, 'C:\Users\te\Documents\s.txt');
{$I-}
Reset (s);
{$I+}
if IOResult <> 0 then
begin
writeln('Error');
halt(2);
end;
Assign (sal, 'C:\Users\te\Documents\sal.txt');
{$I-}
Rewrite (sal);
IOResult;
{$I+}
if IOResult <> 0 then
halt(2);
writeln('Please write the code of the book, only 2 digits');
read(L1);read(L2);
read(s,v);
while (not eof(s)) do
begin
for i:=1 to 2 do
read(s,v);
if (v = '0') then
begin
read(s,v);
if (v = '1') or (v = '2') then
begin
for i:=1 to 5 do
read(s,v);
if (v = 'R') then
begin
read(s,v);
cs:= cs + 1;
end
else
begin
if (v = 'T') then
begin
cn:= cn + 1;
read(s,v);
end;
end;
while (v <> '-') do
read(s,v);
while (v = '-') do
read(s,v);
if (v = L1) then
begin
write(sal, v);
read(s,v);
if (v = L2) then
begin
write(sal,v);
read(s,v);
cl:= cl + 1;
end;
end;
while ( v <> '/') do
begin
write(sal,v);
read(s,v);
end;
write(sal, '-');
end
else
begin
for i:= 1 to 5 do
read(s,v);
if (v = 'R') then
cs:= cs + 1
else
cn:= cn + 1;
if (v = L1) then
read(s,v);
if (v = L2) then
begin
cl:= cl + 1;
read(s,v);
end;
end;
end
else
begin
for i:= 1 to 5 do
read(s,v);
if (v = 'R') then
cs:= cs + 1
else
cn:= cn + 1;
if (v = L1) then
read(s,v);
if (v = L2) then
begin
cl:= cl + 1;
read(s,v);
end;
end;
end;
tot:= cs + cn;
ps:= (cs * 100) div tot;
pn:= (cn * 100) div tot;
writeln('TOTAL ',cl);
writeln();
writeln(ps,'% and',pn,'%');
The file S content:
02022013Rto kill a mockingbird-1301/02012014Tpeter pan-1001/02032013Thowto-2301/02012012Tmaze runner-1001/02012012Tmaze runner-1001/02012012Tmaze runner-1001/$
I really just need someone else's point of view on this code, I think maybe the algorithm is flawed.
Thanks
(After your edit, I see that your code now compiles without error in FPC, so I'm glad you managed to fix that yourself.)
As this is obviously coursework, I'm not going to fix your code for you. Even so, I'm afraid the way you are going about this is completely wrong.
Basically, the main thing wrong with your code is that you are trying to control what happens as you read the source file character by character. Quite frankly, that's a hopeless way of trying to do it, because it makes the execution flow unnecessarily complicated and littered with ifs, buts and loops. It also requires you to keep mental track of what you are trying to do at any given step, and the resulting code is inherently not self-documenting - imagine if you came back to your code in six months, could you tell at a glance how it works and what it does? I certainly couldn't.
You need to break the task down in a different way. Instead of analysing the problem from the bottom up ("If I read this character next, then what I need to do next is ..."), do it from the top down: although your input file is a file of char, it contains a series of strings, separated by a '/' character and finally terminated by a '$' (but this terminator does not really matter). So what you need to do is read these strings one by one; once you've got one, check whether it's the one you're looking for: if it is, process it however you need to, otherwise read the next one until you reach the end of the file.
Once you have successfully read one of the book strings, you can then split it up into the various fields it's composed of. The most useful function for doing this splitting is probably Copy, which lets you extract substrings from a string - look it up in the FPC help. I've included functions ExtractBookTitle and ExtractPreamble which show you what you need to do to write similar functions to extract the T/R code and the numeric code which follows the hyphen. By the way, if you need to ask a similar question in the future, it would be very helpful to include a description of the layout and meaning of the various fields in the file.
So, what I'm going to show you is how to read the series of strings in your S.Txt by building them character by character. In the code below, I do this using a function GetNextBook, which I hope is reasonably self-explanatory. The code uses this function in a while loop to fill the BookRecord string variable. Then, it simply writes the BookRecord to the console. What your code should do, of course, is process the BookRecord contents to see if it is the one you are looking for, and then do whatever the remainder of your task requires.
I hope you will agree that the code below is a lot clearer, a lot shorter, and will be a lot easier to extend in future than the code in your question. The key to structuring a program this way is to break the program's task into a series of functions and procedures which each perform a single sub-task. Writing the program that way makes it easier to "re-wire" the program to change what it does, without having to rewrite the innards of the functions/procedures.
program fileofcharproject;
uses crt;
const
sContents = '02022013Rto kill a mockingbird-1301/02012014Tpeter pan-1001/02032013Thowto-2301/02012012Tmaze runner-1001/02012012Tmaze runner-1001/02012012Tmaze runner-1001/$';
InputFileName = 'C:\Users\MA\Documents\S.Txt';
OutputFileName = 'C:\Users\MA\Documents\Sal.Txt';
type
CharFile = File of Char; // this is to permit a file of char to be used
// as a parameter to a function/procedure
function GetNextBook(var S : CharFile) : String;
var
InputChar : Char;
begin
Result := '';
InputChar := Chr(0);
while not Eof(S) do begin
Read(S, InputChar);
// next, check that the char we've read is not a '/'
// if it is a '/' then exit this while loop
if (InputChar <> '/') then
Result := Result + InputChar
else
Break;
end;
end;
function ExtractBookTitle(BookRecord : String) : String;
var
p : Integer;
begin
Result := Copy(BookRecord, 10, Length(BookRecord));
p := Pos('-', Result);
if p > 0 then
Result := Copy(Result, 1, p - 1);
end;
procedure AddToOutputFile(var OutputFile : CharFile; BookRecord : String);
var
i : Integer;
begin
for i := 1 to Length(BookRecord) do
write(OutputFile, BookRecord[i]);
write(OutputFile, '/');
end;
function ExtractPreamble(BookRecord : String) : String;
begin
Result := Copy(BookRecord, 1, 8);
end;
function TitleMatches(PartialTitle, BookRecord : String) : Boolean;
begin
Result := Pos(PartialTitle, ExtractBookTitle(BookRecord)) > 0;
end;
var
i : Integer; //byte;
s,sal: file of char;
l1,l2: char;
InputChar : Char;
BookFound : Boolean;
cs,cn,cl: integer;
pn,ps,tot: integer;
Contents : String;
BookRecord : String;
PartialTitle : String;
begin
// First, create S.Txt so we don't have to make any assumptions about
// its contents
Contents := sContents;
Assign(s, InputFileName);
Rewrite(s);
for i := 1 to Length(Contents) do begin
write(s, Contents[i]); // writes the i'th character of Contents to the file
end;
Close(s);
cs:=0; cn:=0; i:=0; cl:=0;
// Open the input file
Assign (s, InputFileName);
{$I-}
Reset (s);
{$I+}
if IOResult <> 0 then
begin
writeln('Error');
halt(2);
end;
// Open the output file
Assign (sal, OutputFileName);
{$I-}
Rewrite (sal);
IOResult;
{$I+}
if IOResult <> 0 then
halt(2);
// the following reads the BookRecords one-by-one and copies
// any of them which match the partial title to sal.txt
writeln('Enter part of a book title, followed by [Enter]');
readln(PartialTitle);
while not Eof(s) do begin
BookRecord := GetNextBook(S);
writeln(BookRecord);
writeln('Preamble : ', ExtractPreamble(BookRecord));
writeln('Title : ', ExtractBookTitle(BookRecord));
if TitleMatches(PartialTitle, BookRecord) then
AddToOutputFile(sal, BookRecord);
end;
// add the final '$' to sal.txt
write(sal, '$');
Close(sal);
Close(s);
writeln('Done, press any key');
readln;
end.
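As a sketch of the "similar functions" mentioned above, something along these lines could extract the R/T code and the numeric code after the hyphen. The field positions are my assumption, inferred from the sample record '02022013Rto kill a mockingbird-1301', not something stated in the question:
function ExtractTypeCode(BookRecord : String) : Char;
begin
  // assumes the R/T flag is the 9th character of the record
  if Length(BookRecord) >= 9 then
    Result := BookRecord[9]
  else
    Result := ' ';
end;

function ExtractNumericCode(BookRecord : String) : String;
var
  p : Integer;
begin
  // assumes the numeric code is everything after the '-'
  p := Pos('-', BookRecord);
  if p > 0 then
    Result := Copy(BookRecord, p + 1, Length(BookRecord))
  else
    Result := '';
end;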

Reading text Files - single line vs. multiple lines

I am working on a particular scenario where I have to read from a text file, parse it, extract meaningful information from it, perform SQL queries with the information, and then produce a response/output file.
I have about 3000 lines of code. Everything is working as expected. However, I have been thinking of a conundrum that could possibly disrupt my project.
The text file being read (let's call it Text.txt) may consist of a single line or multiple lines.
In my case, a 'line' is identified by its segment name - say ISA, BHT, HB, NM1, etc. Each segment's end is marked by the special character '~'.
Now if the file consists of multiple lines (such that each line corresponds to a single segment); say:-
ISA....... ~
NM1....... ~
DMG....... ~
SE........ ~
and so on.... then my code essentially reads each 'line' (i.e. each segment), one at a time and stores it into a temp buffer using the following command :-
ReadLn(myFile,buffer);
and then performs evaluations based on each line. Produces the desired output. No problems.
However the issue is... what if the file consists of only a single line (consisting of multiple segments), represented as:-
ISA....... ~NM1....... ~DMG....... ~SE........ ~
then with my ReadLn command I read the entire line instead of each segment, one at a time. This doesn't work for my code.
I was thinking about creating an if/else statement pair based on how many lines my Text.txt file consists of, such as:-
if line = 1:-
then extract each segment at a time...seperated by the special character '~'
perform necessary tasks (3000 lines of code)
else if line > 1:-
then extract each line at a time (corresponding to each segment)
perform necessary tasks (3000 lines of code).
Now the 3000 lines of code are repeated twice, and I don't find it elegant to copy and paste all of that code.
I would appreciate some feedback on how to solve this issue, such that, regardless of whether the file has one line or multiple lines, when I proceed to evaluate I only use one segment at a time.
There are many possible ways of doing this. Which is best for you might depend on how long these files are and how important performance is.
A simple solution is to just read characters one at a time until you hit your tilde delimiter.
The routine ReadOneItem below shows how this can be done.
procedure TForm1.Button1Click(Sender: TObject);
const
FileName = 'c:\kuiper\test2.txt';
var
MyFile : textfile;
Buffer : string;
// Read one item from text file MyFile.
// Load characters one at a time.
// Ignore CR and LF characters
// Stop reading at end-of-file, or when a '~' is read
function ReadOneItem : string;
var
C : char;
begin
Result := '';
// loop continues until break
while true do
begin
// are we at the end-of-file? If so we're done
if eof(MyFile) then
break;
// read in the next character
read ( MyFile, C );
// ignore CR and LF
if ( C = #13 ) or ( C = #10 ) then
{do nothing}
else
begin
// add the character to the end
Result := Result + C;
// if this is the delimiter then stop reading
if C = '~' then
break;
end;
end;
end;
begin
assignfile ( MyFile, FileName );
reset ( MyFile );
try
while not EOF(MyFile) do
begin
Buffer := ReadOneItem;
Memo1 . Lines . Add ( Buffer );
end;
finally
closefile ( MyFile );
end;
end;
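As a side note on the duplication concern in the question: whichever reading strategy you choose, the existing 3000 lines of evaluation logic only need to exist once if they are wrapped in a single routine that takes one segment. ProcessSegment below is a hypothetical name for that routine, not something from your existing code:
procedure ProcessSegment(const Segment: string);
begin
  // the existing parsing / SQL / output logic goes here, written once,
  // and called for every segment regardless of how the segments were
  // physically laid out in the input file
end;
The while loop above would then call ProcessSegment(Buffer) instead of (or as well as) adding the segment to the memo.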
I would use a file mapping via the Win32 API CreateFileMapping() and MapViewOfFile() functions, and then just parse the raw data as-is, scanning for ~ characters and ignoring any line breaks you might encounter in between each segment. For example:
var
hFile: THandle;
hMapping: THandle;
pView: Pointer;
FileSize, I: DWORD;
pSegmentStart, pSegmentEnd: PAnsiChar;
sSegment: AnsiString;
begin
hFile := CreateFile('Path\To\Text.txt', GENERIC_READ, FILE_SHARE_READ, nil, OPEN_EXISTING, 0, 0);
if hFile = INVALID_HANDLE_VALUE then RaiseLastOSError;
try
FileSize := GetFileSize(hFile, nil);
if FileSize = INVALID_FILE_SIZE then RaiseLastOSError;
if FileSize > 0 then
begin
hMapping := CreateFileMapping(hFile, nil, PAGE_READONLY, 0, FileSize, nil);
if hMapping = 0 then RaiseLastOSError;
try
pView := MapViewOfFile(hMapping, FILE_MAP_READ, 0, 0, FileSize);
if pView = nil then RaiseLastOSError;
try
pSegmentStart := PAnsiChar(pView);
pSegmentEnd := pSegmentStart;
I := 0;
while I < FileSize do
begin
if pSegmentEnd^ = '~' then
begin
SetString(sSegment, pSegmentStart, Integer(pSegmentEnd-pSegmentStart));
// use sSegment as needed...
pSegmentStart := pSegmentEnd + 1;
Inc(I);
while (I < FileSize) and (pSegmentStart^ in [#13, #10]) do
begin
Inc(pSegmentStart);
Inc(I);
end;
pSegmentEnd := pSegmentStart;
end else
begin
Inc(pSegmentEnd);
Inc(I);
end;
end;
if pSegmentEnd > pSegmentStart then
begin
SetString(sSegment, pSegmentStart, Integer(pSegmentEnd-pSegmentStart));
// use sSegment as needed...
end;
finally
UnmapViewOfFile(pView);
end;
finally
CloseHandle(hMapping);
end;
end;
finally
CloseHandle(hFile);
end;

delphi - calculate directory size API?

Does anyone know another way to get a directory's size, other than calculating it by iterating file by file? I'm interested in a Win32 API function. I've googled this but didn't find relevant information, so I'm asking here.
PS: I know how to calculate a directory size by using FindFirst and FindNext and summing all the file sizes.
Getting the size of one directory is a pretty big problem, mostly because it's something you can't define. Examples of problems:
Some filesystems, including NTFS and most filesystems on Linux, support the notion of "hard links". That is, the very same file may show up in multiple places. Soft links (reparse points) pose the same problem.
Windows allows mounting of file systems to directories. Example: "C:\DriveD" might be the same thing as "D:\".
Do you want the file size on disk or the actual size of the file? (See the sketch below.)
Do you care about actual DIRECTORY entries? They also take up space!
What do you do with files you don't have access to? Or directories you don't have permission to browse? Do you count those?
Taking all this into account means the only possible implementation is a recursive implementation. You can write your own or hope you find a ready-written one, but the end performance would be the same.
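To illustrate the "size on disk vs. actual size of the file" point, here is a minimal sketch using the Win32 GetCompressedFileSize() function, which reports the compressed/sparse size of a file rather than its logical size; treat it as an assumption-laden example (it does not account for cluster slack), not a complete answer:
uses Winapi.Windows, System.SysUtils;

function GetSizeOnDisk(const FileName: string): Int64;
var
  SizeHigh, SizeLow: DWORD;
begin
  // returns the compressed size for NTFS-compressed/sparse files,
  // otherwise the plain file size; cluster slack is not included
  SizeLow := GetCompressedFileSize(PChar(FileName), @SizeHigh);
  if (SizeLow = INVALID_FILE_SIZE) and (GetLastError <> NO_ERROR) then
    RaiseLastOSError;
  Result := (Int64(SizeHigh) shl 32) or SizeLow;
end;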
I don't think there's an API for this. I don't think Windows knows the answer; it would have to recursively search the tree and sum up each folder, using the technique that Marcelo indicated.
If there were such an API call available, then other things would use it too and Windows would be able to immediately tell you folder sizes.
I already had a function to list files of a folder (and subfolders) and one to read the file size. So, I only wrote a small procedure that puts these two together.
ListFilesOf
function ListFilesOf(CONST aFolder, FileType: string; CONST ReturnFullPath, DigSubdirectories: Boolean): TTSL;
{ If DigSubdirectories is false, it will return only the top-level files,
else it will also return the files in all subdirectories.
If ReturnFullPath is true the returned files will have a full path.
FileType can be something like '*.*' or '*.exe;*.bin'.
Hidden/System files are also included.
Source: Marco Cantu, Delphi 2010 Handbook.
Works with UNC paths. }
VAR
i: Integer;
s: string;
SubFolders, filesList: TStringDynArray;
MaskArray: TStringDynArray;
Predicate: TDirectory.TFilterPredicate;
procedure ListFiles(CONST aFolder: string);
VAR strFile: string;
begin
Predicate:=
function(const Path: string; const SearchRec: TSearchRec): Boolean
VAR Mask: string;
begin
for Mask in MaskArray DO
if System.Masks.MatchesMask(SearchRec.Name, Mask)
then EXIT(TRUE);
EXIT(FALSE);
end;
filesList:= TDirectory.GetFiles (aFolder, Predicate);
for strFile in filesList DO
if strFile<> '' { Bug somewhere: it returns two empty entries ('') }
then Result.Add(strFile);
end;
begin
{ I need this in order to prevent the EPathTooLongException (reported by some users) }
if aFolder.Length >= MAXPATH then
begin
MesajError('Path is longer than '+ IntToStr(MAXPATH)+ ' characters!');
EXIT(NIL);
end;
if NOT System.IOUtils.TDirectory.Exists (aFolder)
then RAISE Exception.Create('Folder does not exist! '+ CRLF+ aFolder);
Result:= TTSL.Create;
{ Split FileType in subcomponents }
MaskArray:= System.StrUtils.SplitString(FileType, ';');
{ Search the parent folder }
ListFiles(aFolder);
{ Search in all subfolders }
if DigSubdirectories then
begin
SubFolders:= TDirectory.GetDirectories(aFolder, TSearchOption.soAllDirectories, NIL);
for s in SubFolders DO
if cIO.DirectoryExists(s) { This solves the problem caused by broken 'Symbolic Link' folders }
then ListFiles(s);
end;
{ Remove full path }
if NOT ReturnFullPath then
for i:= 0 to Result.Count-1 DO
Result[i]:= TPath.GetFileName(Result[i]);
end;
GetFileSize
{ Works with >4GB files
Source: http://stackoverflow.com/questions/1642220/getting-size-of-a-file-in-delphi-2010-or-later }
function GetFileSize(const aFilename: String): Int64;
VAR
info: TWin32FileAttributeData;
begin
if GetFileAttributesEx(PWideChar(aFileName), GetFileExInfoStandard, @info)
then Result:= Int64(info.nFileSizeLow) or (Int64(info.nFileSizeHigh) shl 32)
else Result:= -1;
end;
Finally:
function GetFolderSize(aFolder: string; FileType: string= '*.*'; DigSubdirectories: Boolean= TRUE): Int64;
VAR
i: Integer;
TSL: TTSL;
begin
Result:= 0;
TSL:= ListFilesOf(aFolder, FileType, TRUE, DigSubdirectories);
TRY
for i:= 0 to TSL.Count-1 DO
Result:= Result+ GetFileSize(TSL[i]);
FINALLY
FreeAndNil(TSL);
END;
end;
Note that:
1. You can count the size of only certain file types in a folder. For example, in a folder containing BMP and JPEG files, you can obtain the folder size for the BMP files alone if that is what you need.
2. Multiple file types are supported, like this: '*.bmp;*.png'.
3. You can choose whether or not to include the size of the sub-folders.
Further improvements: you can massively reduce the size of the code by eliminating GetFolderSize and moving GetFileSize directly into ListFilesOf.
Guaranteed to work on Delphi XE7.
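A hypothetical call site might look like this; the folder path, the mask and the ShowMessage call are my own illustration and assume Vcl.Dialogs and System.SysUtils are in the uses clause:
var
  Size: Int64;
begin
  // total size of BMP and PNG files, including sub-folders
  Size := GetFolderSize('C:\Images', '*.bmp;*.png', TRUE);
  ShowMessage(Format('Folder size: %d bytes', [Size]));
end;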
If you already know how to handle one level, just use recursion to cover every level: as your function traverses the current directory, whenever it comes across a subdirectory, call the function recursively to get the size of that subdirectory.
You can use FindFile component. It will recursive the directories for you.
http://www.delphiarea.com/products/delphi-components/findfile/
Many of the previous recursive FindFirst/FindNext examples do not take larger file sizes into account, so they do not produce correct results. Here is a recursion routine that accommodates larger files.
{*-----------------------------------------------------------------------------
Procedure: GetDirSize
Author: archman
Date: 21-May-2015
#Param dir: string; subdir: Boolean
#Return Int64
-----------------------------------------------------------------------------}
function TBCSDirSizeC.GetDirSize(dir: string; subdir: Boolean): Int64;
var
rec: TSearchRec;
found: Integer;
begin
Result := 0;
if dir[Length(dir)] <> '\' then
dir := dir + '\';
found := FindFirst(dir + '*.*', faAnyFile, rec);
while found = 0 do
begin
Inc(Result, rec.Size);
if (rec.Attr and faDirectory > 0) and (rec.Name[1] <> '.') and
(subdir = True) then
Inc(Result, GetDirSize(dir + rec.Name, True));
found := FindNext(rec);
end;
System.SysUtils.FindClose(rec);
end;
Please visit the link below for a complete explanation of the process.
http://bcsjava.com/blg/wordpress/2015/06/05/bcs-directory-size-delphi-xe8/

What does ShFileOperation do when the recycle bin is full?

I use this procedure:
function MoveToRecycle(sFileName: widestring): Boolean;
var
fos: TSHFileOpStructW;
begin
FillChar(fos, SizeOf(fos), 0);
with fos do
begin
wnd := 0;
wFunc := FO_DELETE;
pFrom := PWideChar(sFileName + #0 + #0);
pTo := #0 + #0;
fFlags := FOF_FILESONLY or FOF_ALLOWUNDO or FOF_NOCONFIRMATION or FOF_SILENT;
end;
Result := (ShFileOperationW(fos) = 0);
end;
What will happen if the recycle bin is full - does it return False, or does it delete the file permanently?
Any help would be appreciated.
The best way to find out is to actually try it. I made my recycle bin the minimum size (1 percent of the drive), created a bunch of large files, and used your function to move them to the recycle bin.
What I found (on XP, anyway) is that the function always moves the file to the recycle bin, but permanently deletes the oldest deleted file. So it appears that when the recycle bin fills up, it uses a "first in, first out" approach to decide which file to throw out.
I was not able to get the function to return False. Perhaps creating a file too large for the allocated recycle bin would do this.
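If you want to detect the failure in code rather than by experiment, one option is to also check the fAnyOperationsAborted member that the shell fills in. This is only a sketch; whether a full recycle bin actually sets that flag (rather than silently deleting permanently) is exactly the kind of thing you would have to test:
function MoveToRecycleChecked(const sFileName: WideString): Boolean;
var
  fos: TSHFileOpStructW;
begin
  FillChar(fos, SizeOf(fos), 0);
  fos.wnd := 0;
  fos.wFunc := FO_DELETE;
  fos.pFrom := PWideChar(sFileName + #0 + #0);
  fos.fFlags := FOF_FILESONLY or FOF_ALLOWUNDO or FOF_NOCONFIRMATION or FOF_SILENT;
  // report success only if the API returns 0 and nothing was aborted
  Result := (ShFileOperationW(fos) = 0) and not fos.fAnyOperationsAborted;
end;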
