I started using robocopy with the /Z switch and the log option. I started copying a 109+ GB file; it has been more than two days and it is still being copied. Since I ran the copy with the log option (/LOG+), I cannot see the percentage of completion. Is it safe to open the log while the copy is in progress to see the percentage of completion? I do not want the file copy to be interrupted by opening the log file. Is it safe to copy the log file to a different location and open it there? Can someone clarify this for me?
Whether or not it is possible to open it depends on the program you use for it: programs like Notepad and Notepad++ are able to open a file while another process is still writing to it; MS Word is not able to do that. The biggest difference between Notepad and Notepad++ for this purpose is that Notepad can't refresh the file (or reload it from disk, as it is called in Notepad++).
In case you have a Linux subsystem on your PC, you might use tail -f, which is designed for exactly this purpose.
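If tail isn't available, a minimal Python sketch of the same idea might look like this (the log path below is just a placeholder for whatever you passed to /LOG+). It opens the log read-only and only ever reads, so it should not interfere with robocopy writing to it; copying the log somewhere else first and opening the copy, as you suggested, is equally safe.

    import time

    LOG_PATH = r"C:\logs\robocopy.log"   # hypothetical path; use your /LOG+ target

    def follow(path):
        """Yield new lines appended to the file, like 'tail -f'."""
        with open(path, "r", errors="replace") as f:   # read-only, does not block the writer
            f.seek(0, 2)                               # start at the current end of the file
            while True:
                line = f.readline()
                if line:
                    yield line.rstrip("\n")
                else:
                    time.sleep(1)                      # wait for robocopy to append more

    if __name__ == "__main__":
        for line in follow(LOG_PATH):
            print(line)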
Related
I'm looking for a method that would allow me to automatically open a modified file in Windows. In other words, something running in the background which detects changes in a given set of files, such that when a change is detected, the file is opened. I tried writing a batch file which saves the last modified date and time to a text file and repeatedly checks it. I think this method works, but I don't know if there's a better way out there.
My motivation is that I'm regularly scp'ing files from a Linux machine to Windows, and it would be neat if they just opened automatically on my Windows machine after being updated locally.
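If the batch file gets unwieldy, a minimal Python sketch of the same polling idea might look like this (the watch list is hypothetical; os.startfile hands the file to its associated Windows program). A proper file-system watcher such as .NET's FileSystemWatcher avoids the polling, but polling is usually good enough for a handful of files.

    import os
    import time

    # Hypothetical list of files to watch; replace with your scp targets.
    WATCHED = [r"C:\incoming\report.txt", r"C:\incoming\data.csv"]

    def watch(paths, interval=2.0):
        """Poll the files' modification times and open any file that changed."""
        last_seen = {p: None for p in paths}
        while True:
            for p in paths:
                try:
                    mtime = os.path.getmtime(p)
                except OSError:
                    continue                      # file not there (yet)
                if last_seen[p] is not None and mtime != last_seen[p]:
                    os.startfile(p)               # open with the associated Windows program
                last_seen[p] = mtime
            time.sleep(interval)

    if __name__ == "__main__":
        watch(WATCHED)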
I'm looking for an editor in Windows that constantly saves the file.
In linux I do
cat>somefile
and then just start typing. somefile gets filled up as I type.
Is there an editor or similar tool in Windows? Preferably a non-DOS tool?
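For reference, a minimal Python sketch of what cat > somefile does - append every line typed on standard input to somefile and flush it to disk right away (the file name is the one from the example above):

    import os
    import sys

    # Append each typed line to 'somefile' immediately, like `cat > somefile`.
    with open("somefile", "a", encoding="utf-8") as out:
        for line in sys.stdin:      # on Windows, Ctrl+Z then Enter ends the input
            out.write(line)
            out.flush()             # hand the data to the OS right away
            os.fsync(out.fileno())  # and force it down to the storage device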
I use WebStorm from JetBrains, which saves constantly and unobtrusively.
I really love it. I use it as a text editor and for my web development.
http://www.jetbrains.com/webstorm/
(and no I don't work there).
It's possible to install some Unix features on Windows.
Have a look at CoreUtils.
The shareware text editor UltraEdit by default works with a temporary file, which means it creates a copy of the file to edit in the %TEMP% directory and copies this temporary file back to the original file on save. The use of a temporary file makes Undo and Redo possible.
But at Advanced - Configuration - File Handling - Temporary Files it is possible to disable the use of a temporary file, either for all files or just for large files above a threshold value in KB. All edits made on a file opened without a temporary file are permanent, i.e. written immediately to the storage medium.
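To illustrate the difference between the two modes, here is a rough Python sketch (not UltraEdit's actual code; UltraEdit keeps its working copy in %TEMP%, while the sketch keeps the temp file next to the original so the final swap is a simple rename). Saving via a temporary file only touches the original at save time; saving in place writes straight into it.

    import os
    import tempfile

    def save_via_temp_file(path, new_text):
        """Write the new contents to a temp file, then swap it over the original."""
        dir_name = os.path.dirname(os.path.abspath(path))
        fd, tmp_path = tempfile.mkstemp(dir=dir_name)   # temp copy on the same volume
        with os.fdopen(fd, "w", encoding="utf-8") as tmp:
            tmp.write(new_text)
        os.replace(tmp_path, path)                      # original only changes at save time

    def save_in_place(path, new_text):
        """Write directly into the original file: every save touches it immediately."""
        with open(path, "w", encoding="utf-8") as f:
            f.write(new_text)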
Another feature of UltraEdit is automatic saving at regular intervals, which can be configured at Advanced - Configuration - File Handling - Save, with or without making a backup on every save, and even with version backups, i.e. backups with an incrementing number on every save.
Last but not least, when a temporary file is used for editing (the default), UltraEdit can recover the last edits if UltraEdit crashes (the uedit32.exe process is killed with Windows Task Manager), Windows crashes, or a sudden power loss occurs. The temporary file is updated quite often in the background by UltraEdit, so the restore on the next start after an unexpected end of the editing session often recovers nearly all of the most recent edits on a file. The recovery feature also covers new files that have never been saved under a file name.
It would be interesting to know why you want every edit written to the file immediately - perhaps there is a better solution. In general this is the opposite of what users want when editing a text file, and it is not good for some storage media such as SSDs.
I'm trying to reverse-engineer a program that does some basic parsing: text in, text out. I've got an executable "reference implementation" and the source code to what must be a different version, since the compiled source output != executable output.
The process creates and deletes temporary files very quickly in a multi-step parsing process. If I could take a look at the individual temporary files, I could get some great diagnostic data to narrow down where my source differs from the binary.
Is there any way to do any of the following?
Freeze a directory so that file creation will work but file deletion will fail silently?
Run a program in "slow motion" so that I can look at the files that it creates?
Log everything that a program does, including any data written out to files?
Running a tool like NTFS Undelete should give you a chance to recover the temporary files the program creates and then deletes. Combine this with Process Monitor (ProcMon) from Sysinternals to get the right filenames.
You didn't mention what OS you're doing this on, but assuming you're using Windows...
You might be able to make use of SysInternals tools like Process Explorer and Process Monitor to get a better idea of the files being accessed. As far as I know, there's no "write-only" option on folders. For "slowing down" the program, you'd just have to use a slower computer. For logging, the SysInternals tools will help out quite a bit. Once you know the name(s) of the files being created, you could try preventing their deletion by holding them open from another process; that would prevent the system from being able to delete them.
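A rough Python sketch of that "hold the files open from another process" idea, assuming the parser drops its temporary files as *.tmp into a hypothetical C:\work directory. Python's open() on Windows normally doesn't grant delete sharing, so while a handle is held the other process's delete should fail. It's racy for very short-lived files, but it can be enough to grab a few of them.

    import glob
    import os
    import time

    WORK_DIR = r"C:\work"   # hypothetical: wherever the parser writes its temp files
    held = {}               # path -> open file object, kept alive to block deletion

    while True:
        for path in glob.glob(os.path.join(WORK_DIR, "*.tmp")):
            if path not in held:
                try:
                    # Holding a handle open (default share mode: no delete sharing)
                    # should make the parser's delete fail, so the file sticks around.
                    held[path] = open(path, "rb")
                    print("holding", path)
                except OSError:
                    pass    # it disappeared between listing and opening
        time.sleep(0.01)    # poll fast, the temp files are short-lived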
There are two ways to attack this:
Run various small test cases through both systems and notice the differences. Since the test cases are small, you should be able to figure out why your code works differently than the executable.
Disassemble the executable and remove all the "delete temp file" instructions. Depending on how the program works, this could be a very complex task (say, when there is no central place where the deletion happens).
In my application, I have one exe file that does some conversion on my video files in a directory, and I use CuteFTP to transfer the files in that directory to another server.
CuteFTP is configured to run every minute.
When only 25% of the conversion job is done for a video file, CuteFTP has already transferred that incomplete file to the other server.
What are the ways to fix this problem?
Process the file in a different directory and then move it to the place where CuteFTP will pick it up, once the conversion is finished.
[EDIT] Don't use copy, use move. Both directories must be on the same hard disk, so that the move is just a rename rather than a copy. When using Windows Explorer, use "Cut" or just drag the file with the mouse. Make sure there is no little "[+]" when you drop it.
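A minimal sketch of that pattern in Python, with hypothetical paths and a placeholder converter.exe: convert into a staging directory on the same drive as the outbox, then move the finished file over. A same-volume move is just a rename, so CuteFTP never sees a half-written file.

    import os
    import shutil
    import subprocess

    STAGING = r"D:\video\staging"    # hypothetical: converter writes here
    OUTBOX  = r"D:\video\outbox"     # hypothetical: CuteFTP picks files up from here (same drive!)

    def convert_and_publish(source_file):
        name = os.path.basename(source_file)
        staged = os.path.join(STAGING, name)

        # Run the conversion into the staging directory (converter.exe is a placeholder).
        subprocess.run(["converter.exe", source_file, staged], check=True)

        # Same-volume move = rename: the file appears in the outbox only when complete.
        shutil.move(staged, os.path.join(OUTBOX, name))

Note that shutil.move falls back to copy-and-delete when the directories are on different volumes, which would reintroduce the problem - hence the "same disk" requirement.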
I have a Windows service application on Vista SP1, and I've found that users are renaming its executable file (while it's running) and then rebooting, thus causing the service to fail to start on the next bootup because the service manager can no longer find the exe file, since it has been renamed.
I seem to recall that with older versions of Windows you couldn't do this because the OS placed a lock on the file. Even with Vista SP1 I still cannot copy over the existing file while it's running - Windows reports that the file is in use, which makes sense. So why should I be allowed to rename it? What happens if Windows needs to page in a code page from the exe, but the file has been renamed since the process started? I ran Process Monitor while renaming the exe file, but it didn't report anything strange and just logged the rename like any other file operation.
Does anyone know what's going on behind the scenes here? It seems counterintuitive that Windows would allow a running process's filename (or its dependent DLLs) to be changed. What am I missing?
Your concept is wrong: the filename is not the center of the file-I/O universe - the handle to the open file is. The file is not moved to a different part of the disk when you rename it; it's still in the same place, and the internal data structure for the open file still points to that same place. The bottom line is that your observations are correct: you can rename a running program without causing problems, and you can create a new file with the same name as the running program once you've renamed it. This is actually useful behavior if you want to update software while it is running.
As long as the file is still there, Windows can still read from it - it's the underlying file that matters, not its name.
I can happily rename running executables on my XP machine.
The OS keeps an open handle to the .exe file. Renaming the file simply changes some filesystem metadata about the file, without invalidating open handles. So when the OS goes to page in more code, it just uses the file handle it already has open.
Replacing the file (writing over its contents) is another matter entirely, and I'm guessing the OS opens it with the FILE_SHARE_WRITE flag unset, so no other process can write to the .exe file.
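You can simulate this from user code. The sketch below (Python with ctypes, a rough approximation rather than the real image-section mechanism) opens a scratch file with FILE_SHARE_READ | FILE_SHARE_DELETE and without FILE_SHARE_WRITE, roughly how the loader shares a running exe: renaming then succeeds, while opening the file for writing is refused.

    import ctypes
    import ctypes.wintypes as wt
    import os

    GENERIC_READ      = 0x80000000
    FILE_SHARE_READ   = 0x00000001
    FILE_SHARE_DELETE = 0x00000004
    OPEN_EXISTING     = 3
    INVALID_HANDLE    = wt.HANDLE(-1).value

    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
    kernel32.CreateFileW.restype = wt.HANDLE
    kernel32.CreateFileW.argtypes = [wt.LPCWSTR, wt.DWORD, wt.DWORD, wt.LPVOID,
                                     wt.DWORD, wt.DWORD, wt.HANDLE]
    kernel32.CloseHandle.argtypes = [wt.HANDLE]

    # A scratch file standing in for the service's exe.
    with open("fake_service.exe", "w") as f:
        f.write("pretend this is the service binary\n")

    # Open it with read access, sharing reads and renames/deletes but not writes.
    handle = kernel32.CreateFileW("fake_service.exe", GENERIC_READ,
                                  FILE_SHARE_READ | FILE_SHARE_DELETE,
                                  None, OPEN_EXISTING, 0, None)
    assert handle != INVALID_HANDLE, ctypes.get_last_error()

    os.rename("fake_service.exe", "renamed.exe")   # works: only the directory entry changes
    try:
        open("renamed.exe", "w")                   # refused: write sharing was not granted
    except PermissionError as err:
        print("overwrite blocked:", err)

    kernel32.CloseHandle(handle)
    os.remove("renamed.exe")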
Might be a stupid question, but why do users have access to rename the file if they are not supposed to rename it? But yeah, it's allowed because, as the good answers point out, the open handle to the file isn't lost until the application exits. And there are some uses for it as well, even though I'm not convinced that updating an application by renaming its file is good practice.
You might consider having your service listen for changes to the directory it is installed in. If it detects a rename, it could rename itself back to what it's supposed to be.
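A rough sketch of that idea in Python (a real Windows service would implement this in its own language; the install path and executable name below are hypothetical):

    import os
    import time

    SERVICE_DIR   = r"C:\Program Files\MyService"   # hypothetical install directory
    EXPECTED_NAME = "MyService.exe"                 # name the service manager expects

    def repair_rename(interval=5.0):
        """Poll the install directory and rename a moved executable back."""
        expected = os.path.join(SERVICE_DIR, EXPECTED_NAME)
        while True:
            if not os.path.exists(expected):
                # Look for a lone .exe that is probably our renamed binary.
                exes = [f for f in os.listdir(SERVICE_DIR) if f.lower().endswith(".exe")]
                if len(exes) == 1:
                    os.rename(os.path.join(SERVICE_DIR, exes[0]), expected)
            time.sleep(interval)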
There are two aspects to the notion of file here:
The data on the disk - that's the actual file.
The file name(s) (there could be several, or none) which you can give that data - these are called directory entries.
What you are renaming is the directory entry, which still references the same data. Windows doesn't care, because it can still access the data when it needs to. The running process is mapped to the data, not to the name.