I have a VB6 program running on Windows 7. It is copying a large number of files and sometimes FileCopy fails with an access violation (between every 60 and 500 files).
I cannot reproduce it with a single file; the problem only happens during such mass-copy operations.
It makes no difference whether source/target are on hard disks, network shares, or CD-ROMs.
What could trigger this problem?
EDIT: My question might be a little bit convoluted, so here's some more data:
Run 1:
Start copying 5,000 files
Access violation on file #983
Access violation on file #1437
Access violation on file #1499
Access violation on file #2132
Access violation on file #3456
Access violation on file #4320
Done
Run 2:
Start copying 5,000 files
Access violation on file #60
Access violation on file #3745
Done
Observations
The affected files are always different
The number of affected files tends to decrease if the same file batch is copied multiple times in succession.
Running as Administrator makes no difference
The application has read/write access to all necessary file system objects
This problem happens on Windows 7 workstations only!
Best guess: Is it possible that another user/application is using the file at the moment your process touches it (anti-virus scanner, Win7 search indexing, Windows Defender, etc.)? You might try booting the machine in safe mode to eliminate the background services/apps and run the process again to see.
Is there any consistency in the file types or size of the files causing the issue?
Is the machine low on resources? RAM/Disk Space
You said it occurs on Win7 – is it multiple Win7 machines or just one? (Helps to rule out system resources vs. software/OS.)
Any hints in the Event Viewer (Control Panel > Administrative Tools)? Doubtful, but worth a look.
Does the process take a long time to complete? If you can take the performance hit you might look at destroying and recreating the FSO object after every copy or every X files to make sure there isn’t some odd memory leak issue with Win7/VB6.
Not necessarily a recommended solution, but if all else fails you could handle that error, save the files that trigger it in a dictionary/collection, and re-run the copy for just those files when done. No guarantee it wouldn't happen again.
Not enough information (as you probably know). Do you log the activity? If not, it's a good place to start. Knowing whether certain files are the problem, and if the issue is repeatable, can help narrow it down.
In your case I would also trap (and log) all errors and retry N times, waiting a few seconds between attempts. You could be trying to copy in-use files locked by another process, and a retry may allow time for that lock to go away.
Really, more data is the key, and logging is the way to get it.
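A minimal sketch of that trap-log-retry idea, written here in C against the Win32 CopyFile API rather than VB6 FileCopy; the paths, retry count and delay are just placeholders:

    #include <windows.h>
    #include <stdio.h>

    /* Copy one file, retrying up to maxRetries times with a delay between
       attempts. Returns TRUE on success, FALSE if every attempt failed. */
    static BOOL CopyWithRetry(const char *src, const char *dst,
                              int maxRetries, DWORD delayMs)
    {
        for (int attempt = 1; attempt <= maxRetries; ++attempt) {
            if (CopyFileA(src, dst, FALSE /* allow overwrite */))
                return TRUE;

            /* Log the failure with the Win32 error code, then wait and retry;
               a transient lock (AV scanner, indexer) may be gone by then. */
            fprintf(stderr, "copy %s failed, attempt %d, error %lu\n",
                    src, attempt, GetLastError());
            Sleep(delayMs);
        }
        return FALSE;
    }

    int main(void)
    {
        /* Placeholder paths only. */
        if (!CopyWithRetry("C:\\src\\file.dat", "D:\\dst\\file.dat", 5, 2000))
            fprintf(stderr, "giving up on this file\n");
        return 0;
    }

The same pattern in VB6 would be an On Error handler around FileCopy with a Sleep between attempts.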
Is there any chance your antivirus program or some indexer is getting in the way?
Try creating a procmon (Process Monitor) trace while reproducing the error and see what is actually failing. With the trace you can see whether another program is causing the issue, or whether your app is trying to write somewhere it shouldn't (incorrect permissions) or can't (a temp/scratch directory without enough space).
Check out the presentations linked to on the procmon page or Mark Russinovich's blog for some cool examples of using this tool to solve various Windows/application mysteries.
Is there a hidden/system file in the directory that is potentially blocking it?
Does running the VB6 App with right-click "Run As Administrator" make a difference?
Is the point where it dies at the maximum number of files in the directory? E.g., are you sure the upper limit of whatever loop structure you are using in VB6 is correct (Count vs. Count - 1)?
I have a VBScript that runs continuously on my system to monitor a web page in Internet Explorer.
I permanently deleted this VBScript file from its original location by mistake. However, the script is still loaded in RAM, still running, and still monitoring the web page.
This script is very important to me but I have lost it :(
I want to know if there is any way to recover the code of the VBScript file from the system's RAM or from a temporary file, since the script is still running.
I am not allowed to use any file recovery software, so please don't suggest installing any third-party data recovery software.
Try using the 'ADPlus.vbs' script that comes with WinDbg:
1. http://msdn.microsoft.com/en-us/windows/hardware/hh852365
2. http://support.microsoft.com/kb/286350
As the code was still running, I followed the process below to recover it:
Go to Task Manager
Select the process and create dump
Open online dump analyser (www.osronline.com)
Upload dump file
Download the dump analysis
The dump analysis recovered roughly 95% of the code correctly. Code inside some loops was distorted or changed, but as the owner of the code I was able to correct it.
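If you would rather not upload the dump anywhere, the script source usually sits in the dump as plain readable text, so a strings-style scan can pull it out. A minimal sketch in C, assuming a dump file name of process.dmp and a minimum run length of 8 (both are just placeholders):

    #include <stdio.h>
    #include <ctype.h>

    /* Print every run of printable characters of at least MIN_RUN bytes
       found in the dump; script source tends to show up in such runs. */
    #define MIN_RUN 8

    int main(void)
    {
        FILE *f = fopen("process.dmp", "rb");   /* placeholder file name */
        if (!f) { perror("fopen"); return 1; }

        char run[4096];
        size_t len = 0;
        int c;
        while ((c = fgetc(f)) != EOF) {
            if (isprint(c) || c == '\t') {
                if (len < sizeof(run) - 1)
                    run[len++] = (char)c;
            } else {
                if (len >= MIN_RUN) { run[len] = '\0'; puts(run); }
                len = 0;
            }
        }
        if (len >= MIN_RUN) { run[len] = '\0'; puts(run); }
        fclose(f);
        return 0;
    }

Then search the output for a keyword you know is in the script (a variable name, the URL it monitors) to find the code.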
Use HxD; it can view all RAM content belonging to any process on the fly. It is commonly used to hack currently running games, etc.
After locating your script, you may need to clean out the alphanumeric mess mixed in with your code; Notepad++ and some regex knowledge may be useful.
We're building a Windows-based application that traverses a directory structure recursively, looking for files that meet certain criteria and then doing some processing on them. In order to decide whether or not to process a particular file, we have to open that file and read some of its contents.
This approach seems great in principle, but some customers testing an early version of the application have reported that it's changing the last-accessed time of large numbers of their files (not surprisingly, as it is in fact accessing the files). This is a problem for these customers because they have archive policies based on the last-accessed times of files (e.g. they archive files that have not been accessed in the past 12 months). Because our application is scheduled to run more frequently than the archive "window", we're effectively preventing any of these files from ever being archived.
We tried adding some code to save each file's last-accessed time before reading it, then write it back afterwards (hideous, I know) but that caused problems for another customer who was doing incremental backups based on a file system transaction log. Our explicit setting of the last-accessed time on files was causing those files to be included in every incremental backup, even though they hadn't actually changed.
So here's the question: is there any way whatsoever in a Windows environment that we can read a file without the last-accessed time being updated?
Thanks in advance!
EDIT: Despite the "ntfs" tag, we actually can't rely on the filesystem being NTFS. Many of our customers run our application over a network, so it could be just about anything on the other end.
The documentation indicates you can do this, though I've never tried it myself.
To preserve the existing last access time for a file even after accessing a file, call SetFileTime immediately after opening the file handle with this parameter's FILETIME structure members initialized to 0xFFFFFFFF.
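A minimal sketch of that documented technique in C; the path is a placeholder, and the handle needs FILE_WRITE_ATTRIBUTES access so that SetFileTime can be called on it:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Placeholder path. GENERIC_READ for the read itself, plus
           FILE_WRITE_ATTRIBUTES so SetFileTime is allowed on the handle. */
        HANDLE h = CreateFileA("C:\\data\\sample.txt",
                               GENERIC_READ | FILE_WRITE_ATTRIBUTES,
                               FILE_SHARE_READ, NULL, OPEN_EXISTING,
                               FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            fprintf(stderr, "open failed: %lu\n", GetLastError());
            return 1;
        }

        /* Per the quote above: 0xFFFFFFFF in both FILETIME members tells the
           file system to leave the last-access time alone for any operation
           performed through this handle. */
        FILETIME freeze = { 0xFFFFFFFF, 0xFFFFFFFF };
        SetFileTime(h, NULL, &freeze, NULL);

        char buf[512];
        DWORD bytesRead = 0;
        ReadFile(h, buf, sizeof(buf), &bytesRead, NULL);  /* read as usual */

        CloseHandle(h);
        return 0;
    }

Since you cannot rely on NTFS, it would be worth testing whether the network redirector / remote file system on the other end honours this as well.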
From Vista onwards, NTFS does not update the last access time by default. To control this behaviour, see http://technet.microsoft.com/en-us/library/cc959914.aspx
Starting an NTFS transaction and rolling it back is a very bad idea; the performance will be terrible.
You can also disable last-access updates from the command line:
FSUTIL behavior set disablelastaccess 1
I don't know what your clients' minimum requirements are, but have you tried NTFS transactions? On the desktop the first OS to support them was Vista, and on the server it was Windows Server 2008. It may be worth a look.
Start an NTFS transaction, read your file, roll back the transaction. Simple! :-) I actually don't know whether it will roll back the last access date, though; you will have to test that for yourself.
Here is a link to an MSDN Magazine article on NTFS transactions which includes other links: http://msdn.microsoft.com/en-us/magazine/cc163388.aspx
Hope it helps.
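For what it's worth, a rough sketch of what that would look like with the Transactional NTFS API (now deprecated by Microsoft); whether the rollback really restores the last access date is exactly the part you'd have to test, and the path is a placeholder:

    #define _WIN32_WINNT 0x0600
    #include <windows.h>
    #include <ktmw32.h>              /* CreateTransaction, RollbackTransaction */
    #pragma comment(lib, "KtmW32.lib")

    int main(void)
    {
        HANDLE tx = CreateTransaction(NULL, NULL, 0, 0, 0, 0, NULL);
        if (tx == INVALID_HANDLE_VALUE) return 1;

        /* Open and read the file inside the transaction (placeholder path). */
        HANDLE h = CreateFileTransactedW(L"C:\\data\\sample.txt",
                                         GENERIC_READ, FILE_SHARE_READ, NULL,
                                         OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL,
                                         NULL, tx, NULL, NULL);
        if (h != INVALID_HANDLE_VALUE) {
            char buf[512];
            DWORD bytesRead = 0;
            ReadFile(h, buf, sizeof(buf), &bytesRead, NULL);
            CloseHandle(h);
        }

        /* Throw the transaction away instead of committing it. */
        RollbackTransaction(tx);
        CloseHandle(tx);
        return 0;
    }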
I am trying to figure out when and how Windows updates file access times on files.
First of all, most Windows installs come with last-access-time updates disabled for performance reasons, so before wrapping your head around this, here is what you need to do to activate last access times on NTFS file systems: under the key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem, set the DWORD value NtfsDisableLastAccessUpdate to 0 (it is usually set to 1). If the value doesn't exist, just create it.
Upon reading the File Times article on MSDN I am still in doubt as to how Windows updates access times.
My questions are as follows:
Do access times update upon issuing a WinAPI CreateFile() with FILE_READ_ATTRIBUTES? In my case, doing it programmatically, they don't, yet opening the File Properties dialog of that file through the Explorer shell does update the access time.
Do access times update upon issuing a WinAPI ExtractIconEx() to read an icon from a file?
Again, doing so programmatically, they don't, while opening the File Properties dialog of that file through the Explorer shell does update the access time.
If you ask me, both of those cases should update the file access times, but it seems that direct WinAPI calls don't update them (or Windows/the NTFS driver really lags behind), while operating on files from Windows Explorer updates them reliably. What do you think is, or could be, the issue here?
As a side note, I did call CloseHandle(), as per:
The only guarantee about a file timestamp is that the file time is correctly reflected when the handle that makes the change is closed.
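For reference, a minimal sketch of the kind of programmatic test described above (the file name is a placeholder): read the last access time, open the file with FILE_READ_ATTRIBUTES only, close the handle, and read the time again.

    #include <windows.h>
    #include <stdio.h>

    static void PrintAccessTime(const wchar_t *path)
    {
        WIN32_FILE_ATTRIBUTE_DATA data;
        if (GetFileAttributesExW(path, GetFileExInfoStandard, &data))
            printf("last access: %08lx%08lx\n",
                   data.ftLastAccessTime.dwHighDateTime,
                   data.ftLastAccessTime.dwLowDateTime);
    }

    int main(void)
    {
        const wchar_t *path = L"C:\\data\\sample.txt";   /* placeholder */

        PrintAccessTime(path);

        /* Open with metadata-only access, as described in the question. */
        HANDLE h = CreateFileW(path, FILE_READ_ATTRIBUTES,
                               FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                               OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h != INVALID_HANDLE_VALUE)
            CloseHandle(h);   /* the timestamp is only guaranteed after close */

        PrintAccessTime(path);
        return 0;
    }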
My conclusion is that the opinions floating around the web are indeed true: Windows updates file access times in a seemingly random fashion, so one really shouldn't depend on them in any way.
Off-topic rant: Sorry, forensics guys, you'll have to prove access times using another method or your case may be invalidated in seconds. :P
No, accessing the metadata of the file (name, attributes, timestamps) isn't going to change the last access time. That wouldn't work well in practice; just looking at the directory with Explorer would change it. You have to actually open the file. ExtractIconEx() would normally be an excellent candidate, except that Windows can play tricks with it: a hidden desktop.ini file can redirect the icon to another file.
Using the last access time is pretty worthless for forensics. You'd need a file system filter driver, similar to the one embedded in SysInternals' ProcMon utility. It might be using ETW, by the way; that got pretty powerful around Vista. Nevertheless, your project just got ten times more complicated.
UNIX file-locking is dead-easy: The operating system assumes that you know what you are doing and lets you do what you want:
For example, if you try to delete a file which another process has opened, the operating system will usually let you do it. The original process keeps its file handles until it terminates, at which point the file system quietly recycles the disk resources. No fuss, that's the way I like it.
How different things are on Windows: if I try to delete a file which another process is using, I get an operating-system error. The file is untouchable until the original process releases its lock on the file. That was fine back in the single-user days of MS-DOS, when any locking process was likely to be on the same computer that contained the files; on a network, however, it's a nightmare:
Consider what happens when a process hangs while writing to a shared file on a Windows file-server. Before the file can be deleted we have to locate the computer and ID the process on that computer which originally opened the file. Only then can we kill the process and delete our unwanted file.
What a nuisance!
Is there a way to make this better? What I want is for file locking on Windows to behave like file locking in UNIX. I want the operating system to just let me do what I want, because I'm in charge and I know what I'm doing...
...so can it be done?
No. Windows is designed for the "average user", that is, people who don't understand anything about a computer. Therefore, the OS tries to be smart to avoid PEBKACs. To quote Bill Gates: "There are no issues with Windows that any number of people want to be fixed." Of course, he knows that 99.9999% of all Windows users can't tell whether the program just did something odd because of them or because of the guy who wrote it.
Unix was designed when the world was more simple and anyone close enough to a computer to touch it, probably knew how to assemble it from dirty sand. Therefore, the OS usually lets you do what you want because it assumes that you know better (and if you didn't, you will next time).
Technical answer: Unix allocates an i-node when you create a file; the directory entry is just a name pointing at that i-node, and the i-node stays alive as long as any process still holds the file open. If one process deletes the file and another then creates a file under the same path, you end up with two i-nodes: the old one, still alive because it is held open, and the new one. This is by design, and it allows for a fancy security feature: you can create a file which no one can open but yourself:
Open a file
Delete it (but keep the file handle)
Use the file any way you like
Close the file
After step #2, the only process in the universe that can access the file is the one that created it (unless you want to read the hard disk block by block). The OS keeps the data alive until you either close the file or your process dies, at which point Unix cleans up after you.
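A minimal sketch of that trick in C; the temp path is a placeholder:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* 1. Create and open a file (placeholder path). */
        int fd = open("/tmp/private.dat", O_RDWR | O_CREAT | O_EXCL, 0600);
        if (fd < 0) { perror("open"); return 1; }

        /* 2. Remove the name; the i-node stays alive while fd is open. */
        unlink("/tmp/private.dat");

        /* 3. Use the file any way you like; nobody else can open it by
              name any more, because the name is gone. */
        char buf[6];
        write(fd, "secret", 6);
        lseek(fd, 0, SEEK_SET);
        read(fd, buf, sizeof(buf));

        /* 4. Close it; the kernel reclaims the disk space now. */
        close(fd);
        return 0;
    }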
This design is the foundation of all Unix filesystems. The Windows file system, NTFS, works much the same way internally, but the high-level API is different. Many applications open files in exclusive mode, which prevents anyone, even backup programs, from reading the file. This is even true of applications which just display information, like PDF viewers.
That means you'll have to fix all the Windows applications to achieve the desired effect. If you have access to the source, you can create the file in a shared mode. That would allow other processes to access it at the same time, but then you have to check before every read/write whether the file still exists, whether someone has made changes, and so on.
According to MSDN, you can pass CreateFile() the share-mode flag FILE_SHARE_DELETE in its third parameter (dwShareMode), which:
Enables subsequent open operations on a file or device to request delete access.
Otherwise, other processes cannot open the file or device if they request delete access.
If this flag is not specified, but the file or device has been opened for delete access, the function fails.
Note Delete access allows both delete and rename operations.
http://msdn.microsoft.com/en-us/library/aa363858(VS.85).aspx
So if you can control your applications, you can use this flag.
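Something like this, with a placeholder path; while this handle is open, another process can still delete or rename the file:

    #include <windows.h>

    int main(void)
    {
        /* Share everything, including delete access, so another process can
           remove or rename the file while we still hold a handle to it. */
        HANDLE h = CreateFileW(L"C:\\shared\\log.txt",     /* placeholder */
                               GENERIC_READ,
                               FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                               NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE)
            return 1;

        char buf[256];
        DWORD bytesRead = 0;
        ReadFile(h, buf, sizeof(buf), &bytesRead, NULL);

        CloseHandle(h);
        return 0;
    }

Note that even then the delete only fully takes effect once the last handle is closed; until then the file sits in a delete-pending state.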
Note that Process Explorer allows force-closing file handles (for processes local to the box on which you are running it) via Handle -> Close Handle.
Unlocker purports to do a lot more, and provides a helpful list of other tools.
Also, deleting on reboot is an option (though this sounds like it's not what you want).
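The delete-on-reboot trick is also available programmatically via MoveFileEx; a minimal sketch with a placeholder path (this typically needs administrative rights, since Windows records the pending operation under HKLM):

    #include <windows.h>

    int main(void)
    {
        /* Passing NULL as the new name with this flag schedules the file
           for deletion at the next reboot. */
        if (!MoveFileExW(L"C:\\locked\\stuck.tmp", NULL,
                         MOVEFILE_DELAY_UNTIL_REBOOT))
            return 1;
        return 0;
    }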
That doesn't really help if the hung process still has the handle open; it won't release the resources until that hung process releases the handle. But anyway, in Windows it is possible to force-close a file out from under a process that's using it: Process Explorer from sysinternals.com will let you look at, and close, the handles that a process has open.
When I defragment my XP machine I notice that there is a block of "Unmovable Files". Is there a file attribute I can use to make my own files unmovable?
Just to clarify, I want a way to programmatically tell Windows that a file that I create should be unmovable. Is this possible, and if so, how can I do it?
Thanks,
Terry
A lot of system files cannot be moved after the system boots, such as the page file and registry database files.
This utility runs before Windows boots to defragment those files. I have it set to run at every boot, and it works well for me on several machines.
Note that the very first time you boot up with this utility set to run, it may take several minutes to defrag. After that first run though, it finishes in just 3 or 4 seconds.
Edit0: To respond to your clarification: that link says Windows has marked the page file and registry files as open for exclusive access, so you should be able to do the same thing with the LockFile API call. However, that's not an attribute of the file itself; you'd have to actually run some background program that holds the file open for exclusive access.
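A minimal sketch of that background-program idea, with a placeholder path: keep the file open with no sharing allowed, so any other process that tries to open it normally is refused. Whether a particular defragmenter is actually blocked by this is something you'd have to verify; as said above, it is a running process you have to keep alive, not an attribute of the file.

    #include <windows.h>

    int main(void)
    {
        /* dwShareMode = 0: nobody else may open the file for anything while
           this handle stays open. */
        HANDLE h = CreateFileW(L"C:\\data\\pinned.bin",    /* placeholder */
                               GENERIC_READ, 0 /* no sharing */, NULL,
                               OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE)
            return 1;

        /* Keep the process (and with it the exclusive handle) alive. */
        Sleep(INFINITE);

        CloseHandle(h);
        return 0;
    }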
There are no file attributes that you can place on your files to mark them as immovable. The only way (I think) a file can avoid being moved during defragmentation is for some other process to have the file open (for read or write; I'm not even sure whether it needs to be open in exclusive mode).
Quite frankly, I cannot think of a reason you'd want your files not to move, unless you have specific requirements about where on the disk platter your files reside. Defragmentation should generally lead to faster disk access, and that seems desirable in all cases. :-)
This usually means that the file is in use by some process. If you're defragmenting, you'll likely see this with a lot of system files. If the file should legitimately be movable and is stuck (it's being held by a process that runs at startup but shouldn't be, for example), the most useful way of resolving the problem is to remove all permissions on the file, reboot, restore the permissions, and then get rid of the file/run the program that's trying to use it.
I suppose the ugly way is to have an application start at boot, check every few seconds whether defrag is running, and if so open the file in exclusive mode.
This is really ugly, and I don't recommend it unless there is no cleaner solution.
Terry, the answers all mention ways to prevent files from becoming unmovable during defragmentation. From your question it appears that you in fact want to make your personal files unmovable. Can you please clarify what is appealing about making your files unmovable?
I assume you're using the defragmenter that comes with Windows. Some commercial ones, like Diskeeper, can move some of these files (usually system files). You can try their trial versions.
Contig might serve your purpose http://technet.microsoft.com/en-us/sysinternals/bb897428.aspx
I'm relatively certain I ran across some methods/attributes you could access programmatically to do exactly what you want. This was back in the NT4 days, though, and my memory isn't that good.
For a slightly more complete solution, try Raxco's PerfectDisk. While it is a commercial product, it does a very good job and supports boot-time defrag of system files. The first defrag takes longer than, say, Diskeeper, but it's a single-pass defragger and supports defragging with very little free space left on the drive. Overall it's a much smarter defrag program than any other I've seen, and it supports systems of any size.
http://www.raxco.com/
First, try to move (or delete) the files in safe mode. If you cannot, try to move (or delete) the files from Linux.
But be careful: if those are Windows system files, you will no longer be able to boot Windows.
Some reasons why files are unmovable: the file is too big, the file is open/in use, insufficient security privileges, the file is being accessed by another computer, and many other things.