I need to download a lot of large text files. To save traffic, I came up with the idea to automatically compress them between the source server and my computer.
What services are there for this?
If you own the source server you can compress the files stored there and send the compressed files. If you do not own the source server, I don't think there is much you can do.
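If you do control the source server, here is a minimal sketch of what "compress the files and send the compressed files" could look like using java.util.zip; the file names are placeholders:

    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.util.zip.GZIPOutputStream;

    public class CompressForTransfer {
        public static void main(String[] args) throws IOException {
            // Placeholder paths: compress large.txt into large.txt.gz on the source server
            try (FileInputStream in = new FileInputStream("large.txt");
                 GZIPOutputStream out = new GZIPOutputStream(new FileOutputStream("large.txt.gz"))) {
                byte[] buffer = new byte[8192];
                int read;
                while ((read = in.read(buffer)) != -1) {
                    out.write(buffer, 0, read); // gzip-compress while copying
                }
            }
        }
    }

Plain text usually compresses very well with gzip, so serving the .gz versions can cut the transfer size substantially.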
I am using IIB, and several of the requirements I have are for message flows that can do the following things:
Download a file from an FTP and/or SFTP server to the local file system, with a different name
Rename a file on the local file system
Move and rename a file on the (S)FTP server
Upload a file from the file system to the (S)FTP server, with a different name
Looking at the nodes available (FileInputNode, FileReadNode, FileOutputNode), it appears that they can read and write files in this way, but only by copying them into memory and then physically rewriting them, rather than just issuing a copy/move/download-style command that would never need to open the file in the same way.
I've noticed that there are options to move or store files locally once a read is complete, however, so perhaps there's a way around it using that functionality? I don't need to load the files into memory at all; I don't care what's in them.
Currently I am doing this using a Java Compute Node and the Apache Commons Net classes for FTP, but they don't work for SFTP and the workaround seems too complex, so I was wondering whether there is a pure IIB way to do it.
There is no native way to do this, but it can be done using Apache Commons VFS.
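For illustration, here is a minimal Commons VFS (commons-vfs2) sketch that downloads a file from an SFTP server to the local file system under a different name, then moves/renames the original on the server, without parsing the file content; the host, credentials, and paths are placeholders:

    import org.apache.commons.vfs2.FileObject;
    import org.apache.commons.vfs2.FileSystemManager;
    import org.apache.commons.vfs2.FileSystemOptions;
    import org.apache.commons.vfs2.Selectors;
    import org.apache.commons.vfs2.VFS;
    import org.apache.commons.vfs2.provider.sftp.SftpFileSystemConfigBuilder;

    public class SftpCopyMove {
        public static void main(String[] args) throws Exception {
            FileSystemManager fsManager = VFS.getManager();

            // Placeholder connection options
            FileSystemOptions opts = new FileSystemOptions();
            SftpFileSystemConfigBuilder.getInstance().setStrictHostKeyChecking(opts, "no");

            FileObject remote = fsManager.resolveFile(
                    "sftp://user:password@host/inbound/data.csv", opts);
            FileObject local = fsManager.resolveFile("file:///var/work/data_renamed.csv");

            // Download to the local file system under a different name (streamed, not parsed)
            local.copyFrom(remote, Selectors.SELECT_SELF);

            // Move/rename the file on the SFTP server
            FileObject archived = fsManager.resolveFile(
                    "sftp://user:password@host/archive/data.csv.done", opts);
            remote.moveTo(archived);

            remote.close();
            local.close();
            archived.close();
        }
    }

Because Commons VFS resolves ftp://, sftp:// and file:// URIs through the same FileObject API, one Java Compute Node written this way can cover both the FTP and SFTP requirements.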
I have an application in which I download files from an FTP server.
While my application is downloading a file, a third party is still uploading that same file, so the application ends up with a corrupt file and is unable to process it.
Does anyone know how to deal with this situation, other than using the .complete file mechanism (i.e. a marker that keeps track of when the file is completely written)?
Is it possible to lock the file on the FTP server? The FTP server is Windows.
No, there is no standard locking mechanism; it's all up to you and the other party. Here are some ways to do it in addition to creating a .complete file:
The uploader uploads the file as file.xls.tmp and, when it's complete, renames it to file.xls.
The uploader uploads to a tmp directory and, when it's complete, moves the file into the directory that is scanned.
The uploader uploads the file, and the downloader scans file dates to pick up only files written before a certain time. This is not as reliable, since a file from a crashed uploader may still be scanned.
There are probably more variations, particularly with a custom FTP server, but the plain standard doesn't allow for much "fancy stuff".
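If you and the other party can agree on the first approach, the uploader side might look like this with Apache Commons Net (host, credentials, and file names are placeholders):

    import java.io.FileInputStream;
    import org.apache.commons.net.ftp.FTP;
    import org.apache.commons.net.ftp.FTPClient;

    public class UploadThenRename {
        public static void main(String[] args) throws Exception {
            FTPClient ftp = new FTPClient();
            ftp.connect("ftp.example.com");    // placeholder host
            ftp.login("user", "password");     // placeholder credentials
            ftp.setFileType(FTP.BINARY_FILE_TYPE);
            ftp.enterLocalPassiveMode();

            // Upload under a temporary name so the downloader ignores it
            try (FileInputStream in = new FileInputStream("file.xls")) {
                ftp.storeFile("file.xls.tmp", in);
            }

            // Only once the upload has finished successfully, expose the final name
            ftp.rename("file.xls.tmp", "file.xls");

            ftp.logout();
            ftp.disconnect();
        }
    }

The downloader then simply ignores anything ending in .tmp, so it can never pick up a half-written file.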
I have a large directory which I need to upload to a new host's server, but because I have never transferred such a large directory (32GB), I am wondering whether there is something I'm missing.
Now, I am assuming that the best way is to compress it into a zip file, upload to the server and then extract. But for some reason, my zip file is still about 32GB!
I have already attempted to start uploading the files and it has literally been taking about 30 hours to simply upload about 3GB! Obviously this is too long, so I wondered whether there is a better method of doing this?
Upload speed is determined by your internet connection. Try to find a different location with a faster connection; it could be your work, school, or an internet cafe.
You can test your upload speed here: http://speedtest.net/
Pack everything into one large zip file, upload it there, and unpack it remotely. It is faster than uploading file by file with FTP.
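If you go the zip route, here is a minimal java.util.zip sketch for packing a directory; the paths are placeholders. Keep in mind that content that is already compressed (images, video, archives) will barely shrink, which would explain a 32GB folder still being about 32GB zipped; the main win is transferring one file instead of thousands.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipOutputStream;

    public class ZipDirectory {
        public static void main(String[] args) throws IOException {
            Path sourceDir = Paths.get("/path/to/site");   // placeholder
            Path zipFile = Paths.get("/path/to/site.zip"); // placeholder

            List<Path> files;
            try (Stream<Path> walk = Files.walk(sourceDir)) {
                files = walk.filter(Files::isRegularFile).collect(Collectors.toList());
            }

            try (ZipOutputStream zos = new ZipOutputStream(Files.newOutputStream(zipFile))) {
                for (Path file : files) {
                    // Store each file under its path relative to the source directory
                    zos.putNextEntry(new ZipEntry(sourceDir.relativize(file).toString()));
                    Files.copy(file, zos);
                    zos.closeEntry();
                }
            }
        }
    }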
I want to know how Dropbox is able to synchronize large data files without replacing or re-uploading the whole file to the Dropbox server.
Example: an encrypted zip archive
Suppose I have a 1GB encrypted zip archive that is fully synchronized between my computer and the Dropbox servers.
On my computer I add a file of about 5MB to that zip archive and save it.
Dropbox is able to synchronize the zip archive without re-uploading the whole file again; instead it just uploads the small change I made.
TrueCrypt containers also work in that manner.
Any keywords, ideas, topics, reviews, links, or code are greatly appreciated.
Dropbox uses the rsync algorithm to generate delta files with the difference from file A1 to file A2. Only the delta (usually much smaller than A2) is uploaded to the Dropbox servers, since Dropbox already has file A1. The delta file can then be applied to file A1, turning it into file A2.
You can learn more about the algorithm here.
http://en.wikipedia.org/wiki/Rdiff-backup#Variations
The source code for the library behind the delta creation can be found here.
http://librsync.sourceforge.net/
My first thought (it's late, sorry!) is that it might be performing hashing at the block level.
For example, it might generate a hash for each 64k segment and then upload the whole segment only for those portions whose hash has changed.
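Here is a toy sketch of that block-level idea (not Dropbox's actual implementation): hash fixed-size 64k blocks of the old and new versions of a file and report which blocks would need re-uploading. The file names are placeholders.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    public class BlockDiff {
        private static final int BLOCK_SIZE = 64 * 1024; // 64k blocks, as described above

        // Hash every fixed-size block of a file
        static List<byte[]> blockHashes(String path) throws IOException, NoSuchAlgorithmException {
            byte[] data = Files.readAllBytes(Paths.get(path));
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            List<byte[]> hashes = new ArrayList<>();
            for (int off = 0; off < data.length; off += BLOCK_SIZE) {
                int end = Math.min(off + BLOCK_SIZE, data.length);
                hashes.add(md.digest(Arrays.copyOfRange(data, off, end)));
            }
            return hashes;
        }

        public static void main(String[] args) throws Exception {
            List<byte[]> oldHashes = blockHashes("archive_old.zip"); // placeholder file names
            List<byte[]> newHashes = blockHashes("archive_new.zip");

            // Any block whose hash changed (or that is brand new) would have to be uploaded
            for (int i = 0; i < newHashes.size(); i++) {
                boolean changed = i >= oldHashes.size()
                        || !Arrays.equals(oldHashes.get(i), newHashes.get(i));
                if (changed) {
                    System.out.println("Block " + i + " needs uploading");
                }
            }
        }
    }

Note that fixed block offsets break down as soon as bytes are inserted in the middle of a file, which is why the rsync approach described above uses a rolling checksum to re-align blocks.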
As we all know (don't we?), the FTP functionality on Dreamweaver is inexcusable for a professional product, but I bear with it because Dreamweaver has other useful stuff that overshadows the FTP.
However, I have a specific FTP situation which has been annoying me for a few years now, and was hoping someone had a solution.
We use Zend encryption on some PHP files. Once you do that, the files are no longer text files but binary files.
My understanding is that Dreamweaver FTPs everything as binary (maybe I misunderstand?), but each time I upload (FTP) those ZEND-encrypted PHP files to a server using Dreamweaver, they do not work (just a white screen -- meaning they are corrupt).
I have to drop into the command-line FTP, FTP into the server, and manually PUT the files (after typing BIN of course). Not too hard, but adds extra steps I would rather avoid.
Is there any adjustment, tool, add-on, or ANYTHING that will force Dreamweaver to upload the files correctly?
Looks like you have to modify your FTPExtensionMap.txt; that's what Dreamweaver uses to select the FTP transfer mode.
Although that would make all .php files be transferred in BINARY mode, which may not be what you want if you're transferring other, non-encoded .php files back and forth between Win/UNIX/Mac.
Here are the instructions on how to modify the file.