I have to develop a Java application that has to read some files over the network, edit them, and put them back.
The problem is that I have always done file operations over the network through the FTP protocol, but I recently heard about WebDAV, which is HTTP-based.
Has anyone noticed a difference in speed between them? Which one is best? And why did they "invent" WebDAV if FTP already works for this?
WebDAV has the following advantages over FTP:
By working over a single TCP connection, it's easier to configure it to pass through firewalls, NATs, and proxies. In FTP, the separate data channel can cause problems with proper NAT setup.
Again, because that one TCP connection can be persistent, WebDAV will be a bit faster than FTP when transferring many small files: there is no need to open a new data connection for each file.
GZIP compression is a standard for HTTP but not for FTP (yes, MODE Z is offered in FTP, but it's not defined in any standard).
HTTP has a wide choice of authentication methods that are not defined in FTP. E.g., NTLM and Kerberos authentication are common in HTTP, while in FTP it's hard to get proper support for them unless you write both the client and server sides of FTP.
WebDAV supports partial transfers, while in FTP partial uploads are not possible (i.e., you can't overwrite a block in the middle of a file). A minimal sketch of a ranged download appears below.
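For illustration, here is a minimal sketch of such a ranged download in Java, using only the JDK's java.net.http client (Java 11+); the URL and byte range are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RangedGet {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Ask the server for bytes 0-1023 only; a WebDAV server that
        // advertises "Accept-Ranges: bytes" should reply 206 Partial Content.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/dav/bigfile.bin")) // placeholder URL
                .header("Range", "bytes=0-1023")
                .GET()
                .build();

        HttpResponse<byte[]> response =
                client.send(request, HttpResponse.BodyHandlers.ofByteArray());

        System.out.println("Status: " + response.statusCode()); // expect 206
        System.out.println("Got " + response.body().length + " bytes");
    }
}
```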
There's one more thing to consider (depending on whether you control the server): SFTP (the SSH File Transfer Protocol, not related to FTP in any way). SFTP is more feature-rich than WebDAV, and it is a protocol for accessing remote file systems, while WebDAV was designed with abstraction in mind (WebDAV was for "documents", while SFTP is for files and directories). SFTP has all the benefits mentioned above for WebDAV and is more popular among both admins and developers.
Answer to the question: why did they "invent" WebDAV?
WebDAV stands for Web Distributed Authoring and Versioning.
The Internet was not meant just for consuming resources through URLs (Uniform Resource Locators), but that is what it became.
HTTP had strong semantics for fetching resources (GET and HEAD); POST covered a number of other semantic operations, while DELETE was shrouded in distrust. HTTP also lacked other qualities, like multi-resource operations.
In a nutshell, it was a read protocol, not a write protocol.
You had to go a roundabout way to make your resources (URLs) available for fetching, uploading them through FTP or any number of other mechanisms.
WebDAV was supposed to provide the missing piece of the Internet: support for authoring resources through the same mechanism, HTTP. It extended HTTP's semantics and introduced new HTTP verbs.
It also introduced mechanisms not only to read, write, modify, and delete a resource (URI), but also to make inquiries about the resource's meta properties and modify them. It's not that you could not do this before, but it was done through back-door mechanisms.
So you see, it brought to Internet resources some of the same mechanisms you expect for file operations on a desktop.
Following are some of the analogies (a minimal PROPFIND sketch in Java follows the list):
MKCOL ----- make collection ----- similar to making a folder
PROPFIND ----- get properties (metadata) ----- like Get Info or extended attributes on a Mac
PROPPATCH ----- modify properties
COPY ----- cp
MOVE ----- mv
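As a concrete illustration, here is a minimal, hypothetical sketch of a PROPFIND request using the JDK's java.net.http client; the URL is a placeholder, and a real server replies with a 207 Multi-Status XML body:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PropfindExample {
    public static void main(String[] args) throws Exception {
        // Request all properties of the resources in a collection.
        String body =
                "<?xml version=\"1.0\" encoding=\"utf-8\"?>"
              + "<D:propfind xmlns:D=\"DAV:\"><D:allprop/></D:propfind>";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/dav/")) // placeholder URL
                .header("Depth", "1")                        // list immediate children only
                .header("Content-Type", "application/xml")
                .method("PROPFIND", HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.statusCode()); // expect 207 Multi-Status
        System.out.println(response.body());       // XML listing of properties
    }
}
```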
I hope I have established some of the noble goals of WebDAV as an extension of HTTP to support Internet authoring. Whether we have achieved them is another matter.
For your question
Your application is a client and will have to make do with whatever mechanism is available on the other side, FTP or WebDAV. If WebDAV is available, great, you can use it, but it will take some time to get used to the semantics. FTP has limited semantics and excels in simplicity. If you are already using it, don't change it.
Which is faster
That is akin to asking: which is faster, HTTP or FTP?
On a sly note, if it was such an issue we wouldn't have been downloading / uploading files via HTTP ;)
Since DAV works over HTTP, you get all the benefits of HTTP that FTP cannot provide: for example, strong authentication, encryption, proxy support, and caching.
It is true that you can get some of this through SSH, but the HTTP infrastructure is much more widely deployed than SSH. Further, SSH does not have the wide complement of tools, development libraries, and applications that HTTP does.
DAV transfers (well, HTTP transfers) are also more efficient than FTP: you can pipeline multiple transfers through a single TCP connection, whereas FTP requires a new connection for each file transferred (plus the control connection).
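As a small illustration, a single java.net.http.HttpClient instance keeps the underlying connection alive and reuses it for several sequential requests; the URL and file names below are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

public class ReuseConnection {
    public static void main(String[] args) throws Exception {
        // One client instance: its connection pool keeps the TCP (and TLS)
        // connection open between requests, so small files avoid the
        // per-file connection setup that FTP's data channel requires.
        HttpClient client = HttpClient.newHttpClient();

        String[] files = {"a.txt", "b.txt", "c.txt"}; // placeholder names
        for (String name : files) {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://example.com/dav/" + name))
                    .build();
            client.send(request, HttpResponse.BodyHandlers.ofFile(Path.of(name)));
        }
    }
}
```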
Depends on what you want to do.
For example, the overhead in FTP for fetching a list of files is 7 bytes (LIST -a), while it's 370 bytes with WebDAV (PROPFIND + 207 Multi-Status).
For sending a file, the overhead is lower in FTP than in WebDAV, and so on.
If you need to send/fetch a lot of small files, FTP will prove faster (using multiple connections for proper pipelining, with a per-file TCP connection).
If you're sending/receiving big files, it's about the same with both technologies; the overhead becomes negligible.
Please see:
http://www.philippheckel.com/files/syncany-heckel-thesis.pdf
WebDAV has an advantage over FTP in passing firewalls easily (there are no separate control/data sockets). Speed should be roughly the same, as both protocols transfer the file over a raw TCP socket.
File modification time:
There seems to be a difference in how FTP and WebDAV deal with file modification time.
It seems there is a command in FTP to preserve that time (several FTP clients and servers claim to do that), whereas WebDAV, if I remember correctly, can get the file modification date but cannot set it on upload.
The ownCloud client and some proprietary WebDAV clients seem to have a workaround, but it works only within their own software.
Depending on usage, that is a strong argument in favour of FTP. I don't want my files to have a modification date equal to the upload date; after a later download, I would not be able to tell by date which version of a file I have.
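One client-side workaround sketch, under the assumption that the server sends a Last-Modified header: read it on download and stamp it onto the local copy. The URL and file name are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class PreserveMtime {
    public static void main(String[] args) throws Exception {
        Path local = Path.of("report.txt"); // placeholder file name
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/dav/report.txt")) // placeholder
                .build();
        HttpResponse<Path> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofFile(local));

        // If the server sent Last-Modified, stamp it onto the local copy so
        // the download date does not replace the real modification date.
        response.headers().firstValue("Last-Modified").ifPresent(value -> {
            ZonedDateTime time =
                    ZonedDateTime.parse(value, DateTimeFormatter.RFC_1123_DATE_TIME);
            try {
                Files.setLastModifiedTime(local, FileTime.from(time.toInstant()));
            } catch (java.io.IOException e) {
                throw new RuntimeException(e);
            }
        });
    }
}
```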
Related
As I understand it, active and passive mode in FTP changes the port on which commands and data are sent from the client to the server which can be useful where firewalls are concerned. I think I'm also right in saying that SFTP doesn't have the same concept - but I'm not clear what nuances of the SFTP protocol make it unnecessary/undesirable to mimic that same pattern that exists in FTP.
The active/passive mode distinction in the FTP protocol is needed because FTP uses a separate channel/connection for file transfers, and in different network setups a different mode may be required (though nowadays mostly passive mode is used).
It's not useful where firewalls are concerned; it's a problem where firewalls are concerned. The concept of a separate connection on a separate port was probably not a good idea, as I do not think this model was ever repeated in any other similar protocol. The Wikipedia FTP article mentions that FTP was designed this way because it was originally not intended to operate over TCP/IP (FTP originated in 1971).
In SFTP, there's nothing like that. Everything happens within one connection, so there are no problems "where firewalls are concerned".
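For completeness, here is a minimal, hypothetical sketch of selecting passive mode from a Java client using the Apache Commons Net library; host and credentials are placeholders:

```java
import org.apache.commons.net.ftp.FTPClient;

public class PassiveFtp {
    public static void main(String[] args) throws Exception {
        FTPClient ftp = new FTPClient();
        ftp.connect("ftp.example.com");   // placeholder host
        ftp.login("user", "password");    // placeholder credentials

        // Passive mode: the client opens the data connection to the server,
        // which plays much better with client-side firewalls and NAT.
        ftp.enterLocalPassiveMode();

        String[] names = ftp.listNames();
        if (names != null) {
            for (String name : names) {
                System.out.println(name);
            }
        }
        ftp.logout();
        ftp.disconnect();
    }
}
```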
I have implemented a very minimal proof-of-concept supporting a portion of the WebDAV protocol. This includes the OPTIONS, PROPFIND and GET HTTP verbs. The built-in Windows WebDAV client (on Windows 8.1) can therefore open the WebDAV share, list files and directories, and navigate through these.
The GET HTTP verb implementation provides the Accept-Ranges (as bytes), Content-Length, Content-Type, and Transfer-Encoding (as chunked) headers. When opening a large video file in a browser, it begins to play immediately while the remaining contents download. The built-in WebDAV client of Windows, however, seems to download the entire file to a temporary location before having a media player play it. When a file is 10 GB, this is going to suck.
Is there any way to provide support so that the built-in WebDAV client can read ranges of bytes for streaming purposes (I would imagine it just needs to translate to use Range somehow...)?
It sounds like you did all the correct things to indicate to the client that streaming and range requests are possible. So if the client doesn't respond to that, I think you can conclude that it just doesn't support those features (which is a total bummer).
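For reference, here is a rough sketch of what server-side Range handling can look like, using the JDK's built-in com.sun.net.httpserver. This is a simplified illustration: it reads the whole file into memory for brevity (real code for 10 GB files should stream from disk) and it does not handle suffix ranges like "bytes=-500". Path and port are placeholders:

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

public class RangeServer {
    public static void main(String[] args) throws IOException {
        Path file = Path.of("bigvideo.mp4"); // placeholder file
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        server.createContext("/video", (HttpExchange ex) -> {
            byte[] data = Files.readAllBytes(file); // demo only; stream in real code
            long start = 0, end = data.length - 1;

            // Parse a simple "Range: bytes=start-end" header if present.
            String range = ex.getRequestHeaders().getFirst("Range");
            boolean partial = range != null && range.startsWith("bytes=");
            if (partial) {
                String[] parts = range.substring(6).split("-", 2);
                if (!parts[0].isEmpty()) start = Long.parseLong(parts[0]);
                if (parts.length > 1 && !parts[1].isEmpty()) end = Long.parseLong(parts[1]);
            }

            byte[] slice = Arrays.copyOfRange(data, (int) start, (int) end + 1);
            ex.getResponseHeaders().set("Accept-Ranges", "bytes");
            if (partial) {
                ex.getResponseHeaders().set("Content-Range",
                        "bytes " + start + "-" + end + "/" + data.length);
                ex.sendResponseHeaders(206, slice.length); // Partial Content
            } else {
                ex.sendResponseHeaders(200, slice.length);
            }
            try (OutputStream out = ex.getResponseBody()) {
                out.write(slice);
            }
        });
        server.start();
    }
}
```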
I have set up a Windows 2003 FTP server and use Chilkat to connect to it inside my customized application. My application is developed in VB6 with Chilkat's FTP support. The application runs in different places around the city and connects to my FTP server. From some networks, like Idea Netsetter / BSNL, it is unable to access the FTP server and transfer files; it works perfectly on other networks.
Thanks in advance.
Regards,
Sam
This is likely to be a firewall issue at the client end. FTP is often blocked by firewalls.
Just as well, FTP has its own problems, making it a less-than-ideal alternative. There are better options such as SFTP or FTPS, but support for those is limited in Windows and you'll have to buy both server and client pieces to use one of them.
Fewer firewalls block HTTP and HTTPS, though some are finicky enough to block traffic that doesn't look like Web browsing. Still, your odds of success go up substantially.
An obvious choice might be to use WebDAV. IIS supports WebDAV and it is pretty easy to write simple WebDAV client logic in VB6 based on one of the many HTTP components available. I'd probably use XmlHttpRequest or WinHttpRequest for that. A search ought to turn up several VB6 classes written to wrap one of them to support WebDAV client operations. You can also buy WebDAV client libraries.
Stick to using HTTPS (which means you need a server certificate for IIS) and you won't have passwords going over the network in the clear. Even if you use HTTP you'll be no worse off than using FTP, plus it'll work through the vast majority of firewalls except those that specifically block non-browsing HTTP requests.
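For comparison with the VB6 approach, a basic WebDAV upload is just an HTTP PUT. Here is a minimal, hypothetical sketch in Java (the language of the thread's original question), with placeholder URL and credentials; over HTTPS, both the file contents and the Authorization header are encrypted on the wire:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;
import java.util.Base64;

public class WebDavPut {
    public static void main(String[] args) throws Exception {
        String credentials = Base64.getEncoder()
                .encodeToString("user:password".getBytes()); // placeholder credentials

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/dav/report.txt")) // placeholder URL
                .header("Authorization", "Basic " + credentials)
                .PUT(HttpRequest.BodyPublishers.ofFile(Path.of("report.txt")))
                .build();

        HttpResponse<Void> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.discarding());
        System.out.println(response.statusCode()); // 201 Created or 204 No Content
    }
}
```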
This could be a firewall configuration issue on the client or server. You're not going to be able to do much about the client, but for the server it may depend on whether you're doing active or passive FTP connections.
If you are doing Active connections, make sure ports 20 and 21 are open.
If you're doing Passive connections, you may want to check out this article about configuring the PassivePortRange in Server 2003 FTP: http://support.microsoft.com/?id=555022
I use FTP on a daily basis to work on multiple websites, but when I try to work from home, my darned satellite internet has a latency of about 1000 ms. (It's craptastic service, I know, but there are no alternatives where I live.) So I was wondering whether there is a way I can connect to my web server and transfer files that accommodates this latency.
FTP "works", but it communicates very very slowly, and its a nightmare with multiple files. It takes the connection about 10-15 seconds to start the transfer, and another 5 seconds after the transfer is done. The transfer itself goes very fast as expected, but the handshake process does not, as the server/client seem to need to do a lot of communication to negotiate the transfer. Worse, it seems to need to do this handshake thing for every individual file, which certainly doesn't help.
Is there any way I can tune my FTP setup to work better over a high-latency connection? If not, are there any other protocols or transfer services I might be able to use that could handle such an issue? It's the main fault I find with my ISP, and there's not a lot I've been able to do about it...
Thanks
Sounds like a good case for using UDP rather than TCP-based protocols - e.g. uftp
A quote from the linked site: "especially useful for data distribution over a satellite link (with two way communication), where the inherent delay makes any TCP based communication terribly inefficient".
A few options:
Sneaker-net. Use a USB key.
SCP. I'm almost positive it'll only authenticate/handshake once.
Tunnelling over SSH. The poor man's VPN. You'll be able to tunnel FTP or anything you like over the SSH connection. It'll be as fast as you're going to get and is very secure to boot. (A small Java sketch of local port forwarding follows this list.)
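Here is a minimal, hypothetical sketch of SSH local port forwarding from Java using the JSch library; host, credentials, and ports are placeholders. Note that FTP's separate data connections will not follow a single tunnel automatically, so this pattern works most cleanly for single-connection protocols:

```java
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

public class SshTunnel {
    public static void main(String[] args) throws Exception {
        JSch jsch = new JSch();
        Session session = jsch.getSession("user", "server.example.com", 22); // placeholders
        session.setPassword("password"); // placeholder; key auth is preferable
        session.setConfig("StrictHostKeyChecking", "no"); // demo only; verify host keys in production
        session.connect();

        // Forward local port 2121 to port 21 on the remote machine.
        // A client pointed at localhost:2121 then rides the SSH tunnel,
        // paying the authentication/handshake latency only once.
        session.setPortForwardingL(2121, "localhost", 21);

        System.out.println("Tunnel up; press Ctrl+C to stop.");
        Thread.currentThread().join(); // keep the tunnel open
    }
}
```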
Which is generally considered "best practice" when you want to securely transmit flat files over the wire? Asymmetric encryption seems to be a pain in that you have to manage key sets at the endpoints and make sure the same algorithm is used by all clients, whereas SFTP seems to be a pain because of NAT issues with the encrypted control channel: the router cannot translate the IP address. Is there a third-party solution that is highly recommended?
I believe you're talking about FTP with SSL when you say SFTP, and not the SFTP protocol that goes along with SSH. Use SFTP (the SSH version) as it doesn't require an encrypted control channel and will work fine over NAT. The SFTP page I linked to lists a number of graphical SFTP clients at the bottom of the page.
rsync is the best file transferring utility out there. Supports resume, recursion and a variety of encryption including ssh (the default). Like scp on steroids.
If you have multiple routers to punch through, you can build SSH tunnels. It transfers only the parts of a file that are missing, which makes it great for backups. It has so many useful features that I use it instead of cp for local copying.
It's available for many platforms and included by default on modern *nix systems. More at http://samba.anu.edu.au/rsync/
Use PGP / GPG and transfer the gpg-ed file directly via ftp or any other method.
Yeah, I meant SSL FTP, not SFTP. "Management" is averse to open source, but if that's the de-facto best practice, then that's what we'll use... thanks for the answers.
With FTPS, you can generally switch to an unencrypted control channel via the CCC command after authentication. This approach means no problems with routers, while the data you are transferring remains encrypted.
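Here is a hedged sketch of that approach using Apache Commons Net's FTPSClient, assuming the server actually supports CCC; host and credentials are placeholders:

```java
import org.apache.commons.net.ftp.FTPSClient;

public class FtpsWithCcc {
    public static void main(String[] args) throws Exception {
        FTPSClient ftps = new FTPSClient(); // explicit FTPS (AUTH TLS)
        ftps.connect("ftp.example.com");    // placeholder host
        ftps.login("user", "password");     // placeholder credentials

        ftps.execPBSZ(0);   // required prelude for protected transfers
        ftps.execPROT("P"); // encrypt the data channel

        // Drop control-channel encryption after login so NAT devices can
        // rewrite the PASV/PORT addresses; the file data stays encrypted.
        ftps.execCCC();

        // ... transfer files here ...

        ftps.logout();
        ftps.disconnect();
    }
}
```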