Prevent a file from being copied while another process is still using it - Windows

In my application, I have one exe file that does some conversion on the video files in a directory, and I have also set up CuteFTP to transfer the files in that directory to another server.
CuteFTP is configured to run every minute.
When only 25% of the conversion of a video file is done, CuteFTP transfers that file to the other server.
What are the ways to fix this problem?

Process the file in a different directory and then move it to the place where CuteFTP will pick it up, once the conversion is finished.
[EDIT] Don't use copy, use move. Both directories must be on the same hard disk. When using Windows Explorer, use "Cut" or just drag the file with the mouse. Make sure there is no little "[+]" next to the cursor when you drop it.
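If the conversion application does the handoff itself, here is a minimal Java sketch of the same idea (write into a staging directory, then move the finished file into the watched directory); the paths are placeholders:

    import java.io.IOException;
    import java.nio.file.*;

    public class Handoff {
        public static void main(String[] args) throws IOException {
            // Placeholder paths: the converter writes here first...
            Path staged = Paths.get("C:\\staging\\video.mp4");
            // ...and only the finished file appears where CuteFTP looks.
            Path pickup = Paths.get("C:\\pickup\\video.mp4");

            // ATOMIC_MOVE fails fast if the move cannot be done as a single
            // rename (e.g. across volumes) instead of silently falling back
            // to a copy, so a partially written file is never visible.
            Files.move(staged, pickup, StandardCopyOption.ATOMIC_MOVE);
        }
    }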

Related

How can I mirror the deletion of duplicates from a source into a destination?

Here's the scenario: We have a computer running Windows 10 which has a directory that's backed up nightly. The backups are done with a batch file utilizing Robocopy and scheduled via Windows. The parameters are such that the backup will always add any new files or existing file edits into the destination, but it will never delete files from the destination that have been deleted in the source. It essentially archives all files which are in the source directory at the end of each day.
Here's the tricky part. The source directory is very large, and occasionally someone finds a duplicate file (or several duplicates of a file) in it. When that happens, we need to delete all but one copy of the file, and then we need to access the backup directory manually, locate the file there, and do the same. This is tedious and time-consuming as it's not rare for someone to notice an entire subdirectory full of files that exist 5+ times each.
What we're looking for is a way to scan the source directory and all subdirectories inside for duplicate files and remove all but one copy of them, and then a way to reflect that into the destination. I've assumed that we will not be able to use Robocopy to reflect the changes in the destination due to the nature of the backup script it's running, but we do have the ability to run any third-party software on the destination directory as well, essentially running an action in both directories to clean each of them of duplicate files.
On that note, I'm not against using third-party tools to make this cleaner or more efficient, I'm just not aware of any.
There is one way to solve this problem; I was suffering from it as well, but then I found out how to use a batch file.
There are mainly two commands:
xcopy
robocopy
For your need here, (1) xcopy will be helpful.
xcopy will back up your specific file or folder; even if you changed only some megabytes of data, it will copy just the new and changed data instead of recopying everything.
How to do it:
Open Notepad and type
xcopy "source file" "destination" /y /e /d /c /f /h /i /z /j
and then save the file with a ".bat" extension.
For more options, see:
https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/xcopy
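xcopy alone won't remove duplicates, though, which is what the question actually asks. As a starting point, here is a rough Java sketch of content-hash deduplication that could be run against the source and the destination separately; the root path comes from the command line, the keep-the-first policy is arbitrary, and since it deletes files you should dry-run it first:

    import java.io.InputStream;
    import java.nio.file.*;
    import java.security.MessageDigest;
    import java.util.*;
    import java.util.stream.Stream;

    public class Dedup {
        public static void main(String[] args) throws Exception {
            Path root = Paths.get(args[0]);           // directory to clean
            Map<String, Path> seen = new HashMap<>(); // content hash -> first file seen

            List<Path> files;
            try (Stream<Path> s = Files.walk(root)) {
                files = s.filter(Files::isRegularFile).sorted().toList();
            }

            // Keep the first file with a given content hash, delete the rest.
            for (Path p : files) {
                String hash = sha256(p);
                if (seen.putIfAbsent(hash, p) != null) {
                    System.out.println("duplicate: " + p);
                    Files.delete(p); // comment this out for a dry run
                }
            }
        }

        static String sha256(Path p) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            try (InputStream in = Files.newInputStream(p)) {
                byte[] buf = new byte[8192];
                for (int n; (n = in.read(buf)) > 0; ) md.update(buf, 0, n);
            }
            return HexFormat.of().formatHex(md.digest());
        }
    }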

Using IIB File nodes to move, copy, download and rename (S)FTP files without opening them

I am using IIB, and several of the requirements I have are for message flows that can do the following things:
Download a file from an FTP and/or SFTP server to the local file system, with a different name
Rename a file on the local file system
Move and rename a file on the (S)FTP server
Upload a file from the file system to the (S)FTP server, with a different name
Looking at the nodes available (FileInputNode, FileReadNode, FileOutputNode), it appears that they can read and write files in this way, but only by copying them into memory and then physically rewriting the files - rather than just issuing a copy/move/download-type command, which would never need to open the file at all.
I've noticed that there are options to move or store files locally once the read is complete, however, so perhaps there's a way around it using that functionality? I don't need to read the files into memory at all - I don't care what's in them.
Currently I am doing this using a Java Compute Node and the Apache Commons Net classes for FTP - but they don't work for SFTP, and the workaround seems too complex, so I was wondering whether there is a pure IIB way to do it.
There is no native way to do this, but it can be done using Apache Commons VFS.
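As a rough sketch of what that looks like from a Java Compute Node - the host, credentials and paths are placeholders, and SFTP URLs additionally require the JSch library on the classpath:

    import org.apache.commons.vfs2.FileObject;
    import org.apache.commons.vfs2.FileSystemManager;
    import org.apache.commons.vfs2.VFS;

    public class SftpMove {
        public static void main(String[] args) throws Exception {
            FileSystemManager fsManager = VFS.getManager();

            // Placeholder URIs: move and rename a file on the SFTP server
            // without reading its contents into memory here.
            FileObject src = fsManager.resolveFile("sftp://user:password@host/inbox/data.csv");
            FileObject dst = fsManager.resolveFile("sftp://user:password@host/archive/data-renamed.csv");

            src.moveTo(dst); // a server-side rename where the provider supports it
            src.close();
            dst.close();
        }
    }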

How Windows differentiates between Copied files and Created files

I am looking for a bit of advice on how the Windows file system differentiates between files that are copied (copy-and-pasted from another location) and files that are created (a new file created in a folder).
A bit of background so it makes more sense: I have an application that is used to move files. The application monitors a directory, and when a file is placed in the directory it moves it elsewhere. However, I am having issues where the application will not pick up a file that is created within the monitored directory, but will pick up files that have been created elsewhere and are copied into the monitored directory.
Any advice on how Windows differentiates, or if it does at all, would be greatly appreciated.
This is running on Microsoft Windows Server 2008 R2 Standard. Unfortunately I can't dig into the code and see what is going on under the hood, so I need to get an idea of what the difference, if any, would be.
Filesystems don't know about an operation called "copying" a file. Any copy is a sequence of file open/read/write/close operations. The same applies to moving a file to a different filesystem. Moving within the same filesystem, though, is an operation native to the filesystem, and it can be done with one command.
Now about your problem. Most likely you catch the creation of the file (before the data is written), and when your application reacts, the file is still open for writing. So you need to wait until the file is closed.
Depending on how you do the monitoring, such waiting is done in different ways. In filesystem filter drivers you wait for the file-close operation. With the .NET FileSystemWatcher there's no way to track the close operation, but I've seen a couple of tricks for this here on Stack Overflow (I don't have a link, though, sorry).
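One common trick is to retry opening the file for writing until the writer has closed it. The original discussion is about .NET, but here is a minimal Java sketch of the same pattern, with a placeholder retry interval:

    import java.io.IOException;
    import java.nio.channels.FileChannel;
    import java.nio.file.*;

    public class WaitForClose {
        // Poll until the file can be opened for writing. On Windows, a writer
        // that still holds the file open without share access makes this open
        // call fail, so success is a reasonable signal that the writer is done.
        static void waitUntilClosed(Path file) throws InterruptedException {
            while (true) {
                try (FileChannel ch = FileChannel.open(file, StandardOpenOption.WRITE)) {
                    return; // open succeeded: the file has been released
                } catch (IOException stillInUse) {
                    Thread.sleep(500); // placeholder retry interval
                }
            }
        }

        public static void main(String[] args) throws InterruptedException {
            waitUntilClosed(Paths.get(args[0]));
            System.out.println("file is closed and safe to move");
        }
    }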
[Screenshot: properties of a file on the D: drive, where it was originally created]
[Screenshot: properties of the same file after being copied to the E: drive]
As you can see, the file that was copied to the E: drive has its creation time set to the moment it was copied, which is later than its modification time - the modification time is carried over from the file's last modification in its previous location.
So I guess this illustrates how Windows differentiates between copied files and created files.
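To check the same thing programmatically, a small Java sketch that prints both timestamps for a path given on the command line:

    import java.io.IOException;
    import java.nio.file.*;
    import java.nio.file.attribute.BasicFileAttributes;

    public class Timestamps {
        public static void main(String[] args) throws IOException {
            Path p = Paths.get(args[0]);
            BasicFileAttributes attrs = Files.readAttributes(p, BasicFileAttributes.class);

            // For a freshly copied file on Windows, creationTime() is the time
            // of the copy and can be later than lastModifiedTime(), which is
            // carried over from the original file.
            System.out.println("created:  " + attrs.creationTime());
            System.out.println("modified: " + attrs.lastModifiedTime());
        }
    }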

Access DBF from FTP Server

I have an application built on Clipper and DBF files.
Is it possible to access a DBF file from an FTP server using a Clipper program?
Thanks.
No, not really.
FTP is not a "shared folder". You can create a batch file which downloads the file via FTP, then runs your Clipper application, and then, after the application finishes, copies the modified .DBF back to the FTP server.
Note however that it will be slow (you need to transfer the whole .DBF file even if you only need a tiny bit of it) and that you MAY NOT use it in more than one place at a time (running it in more than one place will overwrite data, with the possibility of file truncation or corruption).
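As an illustration of that download-run-upload cycle, here is a rough Java sketch using the Apache Commons Net FTPClient; the host, credentials, paths and the Clipper executable name are all placeholders:

    import java.io.*;
    import org.apache.commons.net.ftp.FTP;
    import org.apache.commons.net.ftp.FTPClient;

    public class DbfRoundTrip {
        public static void main(String[] args) throws Exception {
            FTPClient ftp = new FTPClient();
            ftp.connect("ftp.example.com");        // placeholder host
            ftp.login("user", "password");         // placeholder credentials
            ftp.setFileType(FTP.BINARY_FILE_TYPE); // DBF files are binary

            // 1. Download the whole DBF file to the local disk.
            try (OutputStream out = new FileOutputStream("data.dbf")) {
                ftp.retrieveFile("/data/data.dbf", out);
            }

            // 2. Run the Clipper application against the local copy.
            new ProcessBuilder("clipperapp.exe", "data.dbf").inheritIO().start().waitFor();

            // 3. Upload the modified file back, overwriting the original.
            try (InputStream in = new FileInputStream("data.dbf")) {
                ftp.storeFile("/data/data.dbf", in);
            }
            ftp.logout();
            ftp.disconnect();
        }
    }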
There are utilities that present FTP as a filesystem ("shared folder"), for example curlftpfs, but while that allows you to run the .exe directly instead of via the aforementioned .bat file, ALL THE LIMITATIONS mentioned in the previous paragraph still apply - it is not possible to avoid them due to the nature of FTP.

Fastest way to move files within remote computer from Cocoa application?

I have files stored in a shared directory on one computer and a Cocoa Application running on another computer on the same LAN.
I want the application to move files within the shared directory.
I'm using NSFileManager's copyItemAtPath:toPath:error: method. But sometimes it seems extremely slow, regardless of file size. Why would that operation take much longer than doing it directly on the shared directory's computer?
I'd guess - I don't know for sure - that NSFileManager first downloads the file to be copied and then re-uploads it under a different name. The last thing it does is remove the original file. Of course the downloading and re-uploading take some time.
The reason for this procedure is that most file-sharing protocols don't have a 'copy' command, so the client has to do all the work itself with the procedure described.
