Which FTP transfer modes are widely used?

Reading the FTP RFC (RFC 959), I notice some modes that I've never seen used, and that indeed don't seem to be implemented by popular FTP software (vsftpd, for example). In particular, for the STRU command, only file structure ("STRU F") is commonly used, and for the MODE command, only stream mode ("MODE S") is commonly used.
So the question is, when following best practice for developing interoperable FTP client and server software:
Is it useful to support the other STRU options (record and page)? These seem very old-fashioned.
Is it useful to support the other MODE options (block and compressed)? I can see the point in compressed, but I'm particularly wondering whether any clients/servers will expect block to be there.
Are there any surveys of which existing FTP implementations support which options?

I maintain a custom FTP server and regularly refer to http://cr.yp.to/ftp.html for this sort of question. Specifically, I followed the suggestions for TYPE/MODE/STRU at http://cr.yp.to/ftp/type.html and so far have had no issues.
No client I've seen connect has sent an STRU request besides "STRU F". Similarly, I've only ever seen "MODE S".
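For what it's worth, the command handling this implies is tiny. Here is a minimal sketch in Python of what following that advice might look like: accept only the values clients actually send and answer everything else with 504. The dispatch function and reply strings are illustrative assumptions, not taken from any particular server.

```python
# Sketch of STRU/MODE/TYPE handling for an FTP server: accept only the
# common values, reject the rest with 504 ("Command not implemented for
# that parameter", per RFC 959).
def handle_command(verb: str, arg: str) -> str:
    verb, arg = verb.upper(), arg.strip().upper()
    if verb == "STRU":
        return "200 OK." if arg == "F" else "504 Only file structure (STRU F) is supported."
    if verb == "MODE":
        return "200 OK." if arg == "S" else "504 Only stream mode (MODE S) is supported."
    if verb == "TYPE":
        # Per my reading of http://cr.yp.to/ftp/type.html, accepting these
        # spellings covers real-world clients.
        return "200 OK." if arg in ("A", "A N", "I", "L 8") else "504 Unsupported TYPE."
    return "502 Command not implemented."
```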

I would suggest searching for open-source FTP clients and servers (especially those still being actively updated) and looking at how many of them implement these "obsolete" transfer modes.
I once wrote an FTP client (about seven years ago) and implemented just the most basic transfer settings (the ASCII and binary types, if I remember correctly). I never had a problem with any server when using it.

It sounds like you are mostly concerned with interoperability. The answer is a bit different between client and server.
For a server, you want to implement the basic modes that clients use. For every client, you need to support at least one working configuration, so the number of combinations should be relatively low. Beyond the minimum, supporting both active and passive mode would probably be the major addition (the Mozilla community has wanted passive support for a long time, and it is probably never going to happen).
If you are writing a client, providing good URL support and date/time handling is probably the biggest barrier.
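Seen from the client side, the two data-connection styles mentioned above look like this with Python's standard-library ftplib; the host is a placeholder:

```python
# Exercising both FTP data-connection styles from a client.
from ftplib import FTP

ftp = FTP("ftp.example.com")
ftp.login()            # anonymous login
ftp.set_pasv(True)     # passive: client opens the data connection (friendlier to client-side NAT)
ftp.retrlines("LIST")
ftp.set_pasv(False)    # active: server connects back via PORT (often blocked by firewalls)
ftp.retrlines("LIST")
ftp.quit()
```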


Existing Event Driven Network Protocols

I am building a set of programs that consist of multiple clients and a single server.
The clients frequently push small packets of data to the server, which validates the information (returning an error if the data is invalid) and processes it. The information may then trigger events that clients are subscribed to, allowing clients to be notified instantly (or as close to instantly as possible), along with a small amount of data.
I have some ideas about how to do this, but I am trying to avoid creating a protocol of my own, mainly as I'm sure it would take forever and I would probably make a few errors. So I was wondering if there are any existing protocols that I could implement into my system that would provide such functionality.
The number of clients will initially be quite small, but will grow over time to potentially include thousands of clients (each with their own subscriptions), and several front-end servers (each handling a subset of subscriptions) passing the information back and forth with back-end servers for improved capability.
So, if anyone knows of any existing protocols that implement these requirements and functionality, that would be fantastic.
EDIT
I am currently looking at the XMPP protocol and the JXTA protocol suite (for reference, with implementations in other languages). Both seem quite good and provide the necessary connectivity, but I have not had the opportunity to test either of them in my environment, or to find out whether they are even suitable for what I am attempting.
Additionally, some of the network clients will be outside of the local network, operating over a WAN. Security is not so much of an issue, but I need to take into account the increased latency, and firewall rules (local to the connection hosting the application, and ISP firewalls) that could be blocking certain ports or transport protocols. (I have read that some ISPs were blocking UDP packets, but I am not sure how widespread this is; I can use UDP at home, at the office, on mobile, at friends' houses, etc., and have yet to experience it myself.)
I'm sorry if the following is not exactly what you're after, but I am slightly confused by your use of the word 'protocol'. I understand a protocol to be a 'communication specification' only, where the implementation is left entirely to you. If that is the case, I always find the following graphic useful: link.
If on the other hand you are looking for a solution which allows you to easily implement the networking side of your application, helping save time, then checkout the following network libraries, which implement their own custom protocol:
NetworkComms.Net
Lidgren
ZeroMQ
Disclaimer: I'm a developer for NetworkComms.Net
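To give a concrete taste of the event-subscription shape those libraries support, here is a minimal sketch using ZeroMQ from the list above, via its pyzmq binding (pip install pyzmq). The port, topic prefix, and payload format are all made up for illustration.

```python
# Minimal PUB/SUB sketch with pyzmq.
import zmq

def run_server():
    ctx = zmq.Context()
    pub = ctx.socket(zmq.PUB)
    pub.bind("tcp://*:5556")
    # After validating a client's update, broadcast an event to all
    # subscribers of the matching topic. (In real use the publisher runs
    # continuously, so the slow-joiner race in this toy doesn't arise.)
    pub.send_string('orders.created {"id": 42}')

def run_client():
    ctx = zmq.Context()
    sub = ctx.socket(zmq.SUB)
    sub.connect("tcp://localhost:5556")
    sub.setsockopt_string(zmq.SUBSCRIBE, "orders.")  # prefix-matched topic filter
    print(sub.recv_string())  # blocks until a matching event arrives
```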

multi-client inter-process communication on Windows, VB6

What is the best way for multiple client programs to communicate with a single server program, all running on a single Windows computer? All written in VB6. I'd appreciate recommendations of how you might solve this problem.
NOTE: we are working on a transition to .NET, but have to add a capability to the VB6 version before the .NET one will be ready.
The possibilities include TCP connections, named pipes, shared memory, messages, files, and more.
A client passes the server a string as input, and the server combines it with data known only to the server to generate another string, which is returned to the client. Both strings are only about 100 characters long. The server is contacted only when a new file needs to be opened, so it is a very low volume of communication... probably a flurry of 10 calls within 15 seconds, followed by an hour of idle time. But it is possible that two clients would choose about the same time to request information. Blocking/locking are certainly acceptable, as the server will be done with each request in well under a second, and several seconds of delay is unimportant to any of the programs.
The server's algorithm is complex and, for several reasons important to the application, should not be replicated in each helper program. That is the reason for needing a server.
Background:
I am adding a capability to a large existing legacy program. This single program has several other legacy programs which act as helpers and are run when the user makes certain choices. These programs are started with a shell command and are not just separate threads. For instance, one helper loads new data from a DVD drive onto the hard drive. Another helper just displays a chart of the current positions of the planets.
This is a LARGE commercial legacy program that happens to be written in VB6. We are working to convert it and all the helper programs to .NET, but must first release a new version under VB6 with this added capability. (Please don't tell me not to use VB6, as we are already moving elsewhere.) We need a temporary VB6 solution.
VB6 does TCP and UDP extremely well via the standard Winsock Control component included in the Pro and Enterprise editions. A lot of shadetree coders do seem to struggle with it, though. This is probably the most obvious route, since the only other native IPC in VB6 would be COM/DCOM and DDE; however, MSMQ provides excellent support for VB6 as well.
The downside of IP-based protocols is their limited namespace and the resulting high probability of collisions (64K port numbers, many set aside for standard applications, ephemeral port ranges, etc.). They're also somewhat "heavyweight", but considering the vast resources of even the oldest PCs still in service and your light traffic requirements, you can ignore that in deciding.
Another option you've considered is Named Pipes.
This offers a number of advantages in your situation. For one thing, the namespace is much larger, requiring only a unique name, which in the post-Win9x era can be up to 256 characters long, making uniqueness fairly easy to achieve. For another, as long as your firewalls permit "File and Print Sharing", you're all set on that front.
Also, for your application you only seem to require an RPC-style mechanism rather than arbitrary bidirectional streams or messages. TransactNamedPipe() calls in your clients might be ideal. Named Pipes work over a LAN, but within one PC they are quite fast and lightweight.
While VB6 doesn't come with a Named Pipe component such a thing is fairly easy to create as long as extremely high performance isn't required. You can use Timer-based polling in the server instead of trying to implement overlapped I/O to get asynchronicity. I put one together a couple of years ago and have had good luck with this approach.
I published a fairly stable rendition of this a while back at PipeRPC - RPC Over Named Pipes. There is an older and a somewhat newer version there with examples of use and documentation. As designed, clients make "calls" passing a Byte array of request parameters and receiving back a Byte array of response results. You can also shove Unicode Strings through with no changes, letting the compiler coerce the types.
Just one "drop in" UserControl for both clients and servers.
Looking back at this question:
The server's algorithm is complex and, for several reasons important to the application, should not be replicated in each helper program. That is the reason for needing a server.
If that's really the concern why not just create a shared DLL that all programs use?
For a one-off upgrade release to an existing VB6 application being moved to a newer platform, I would stress keeping the modification as simple and straightforward as possible. As a result, I wouldn't go down any routes involving shared memory or anything relatively unusual.
A few options, none perfectly simple, but at least some ideas:
Expose a COM object in the server code that performs the translation, and can be consumed by the client apps. The clients instantiate the object from the server as an out-of-process object, and let COM handle all the marshalling, etc.
Does the server have any network awareness? VB6 doesn't do sockets/TCP natively very well, but if you've had a reason to add that in, you might be able to leverage it to perform a socket-based connection and data exchange.
The server and client could each poll a common resource folder for the presence of a specific file that constitutes an inbound/outbound request for the translation service you describe. Not very elegant, but it might be the simplest (a rough sketch follows below).
Just a few ideas to give you some things to think about. Hope that's helpful in some way. Good luck!
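For the third option, this is roughly the polling shape, written in Python here just to show the idea. The folder, the extensions, and the timing are all assumptions; real code should write to a temporary name and rename, so a reader never sees a half-written file.

```python
# Naive shared-folder RPC: client drops a .req file, server polls for
# it and answers with a matching .rsp file.
import time
import uuid
from pathlib import Path

EXCHANGE = Path(r"C:\ProgramData\MyApp\exchange")  # hypothetical shared folder

def client_request(text: str, timeout: float = 10.0) -> str:
    req = EXCHANGE / (uuid.uuid4().hex + ".req")
    rsp = req.with_suffix(".rsp")
    req.write_text(text)
    deadline = time.time() + timeout
    while time.time() < deadline:
        if rsp.exists():
            try:
                return rsp.read_text()
            finally:
                rsp.unlink()
        time.sleep(0.1)
    raise TimeoutError("server did not answer")

def server_poll_once(translate) -> None:
    # 'translate' stands in for the server's private algorithm.
    for req in EXCHANGE.glob("*.req"):
        req.with_suffix(".rsp").write_text(translate(req.read_text()))
        req.unlink()
```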

is it reasonable to protect drm'd content client side

Update: this question is specifically about protecting (enciphering / obfuscating) the content client side vs. doing it before transmission from the server. What are the pros / cons of going with an approach like iTunes', in which the files aren't enciphered / obfuscated before transmission?
As I added in my note in the original question, there are contracts in place that we need to comply with (as is the case for most services that implement DRM). We push for DRM-free, and most content-provider deals are on board with it, but that doesn't free us of obligations already in place.
I recently read some information regarding how iTunes / FairPlay approaches DRM, and didn't expect to see that the server actually serves the files without any protection.
The quote in this answer seems to capture the spirit of the issue.
The goal should simply be to "keep honest people honest". If we go further than this, only two things happen:
1. We fight a battle we cannot win. Those who want to cheat will succeed.
2. We hurt the honest users of our product by making it more difficult to use.
I don't see any impact on the honest users here; files would be tied to the user regardless of whether this happens client or server side. This does give another chance to those in 1.
An extra bit of info: client environment is adobe air, multiple content types involved (music, video, flash apps, images).
So, is it reasonable to do as iTunes' FairPlay does and protect the media client side?
Note: I think unbreakable DRM is an unsolvable problem, and as for most people looking for an answer to this, the need for it comes from it already being in a contract with content providers... along the lines of "reasonable best effort".
I think you might be missing something here. Users hate, hate, hate, HATE DRM. That's why no media company ever gets any traction when they try to use it.
The kicker here is that the contract says "reasonable best effort", and I haven't the faintest idea of what that will mean in a court of law.
What you want to do is make your client happy with the DRM you put on. I don't know what your client thinks DRM is, can do, costs in resources, or if your client is actually aware that DRM can be really annoying. You would have to answer that. You can try to educate the client, but that could be seen as trying to explain away substandard work.
If the client is not happy, the next fallback position is to get paid without litigation, and for that to happen, the contract has to be reasonably clear. Unfortunately, "reasonable best effort" isn't clear, so you might wind up in court. You may be able to renegotiate parts of the contract in the client's favor, or you may not.
If all else fails, you hope to win the court case.
I am not a lawyer, and this is not legal advice. I do see this as more of a question of expectations and possible legal interpretation than a technical question. I don't think we can help you here. You should consult with a lawyer who specializes in this sort of thing, and I don't even know what speciality to recommend. If you're in the US, call your local Bar Association and ask for a referral.
I don't see any impact on the honest users here; files would be tied to the user regardless of whether this happens client or server side. This does give another chance to those in 1.
Files being tied to the user requires some method of verifying that there is a user. What happens when your verification server goes down (or is discontinued, as Wal-Mart's was)?
There is no level of DRM that doesn't affect at least some "honest users".
Data can be copied
As long as client hardware, standalone, cannot distinguish between a "good" and a "bad" copy, you will end up limiting all general copies and copy mechanisms. Most DRM companies deal with this fact by telling me how much this technology sets me free. Almost as if people would start to believe it when they hear the same thing often enough...
Code can't be protected on the client. Protecting code on the server is a largely solved problem; protecting code on the client isn't. All current approaches come with stringent restrictions.
Impact works in subtle ways. At the very least, you have the additional cost of implementing client-side DRM (and all the follow-up costs, including the horde of "DMCA"-shouting lawyer gorillas). It is hard to prove that you will offset this cost with increased revenue.
It's not just about code and crypto. Once you implement client-side DRM, you unleash a chain of events in marketing, public relations and legal. As long as they don't stop alienating users, you don't need to bother.
To answer the question "is it reasonable", you have to be clear about what, exactly, you're trying to protect against when you use the word "protect"...
For example, are you trying to prevent:
authorized users from using their downloaded content via your app under certain circumstances (e.g. rental period expiry, copied to a different computer, etc)?
authorized users from using their downloaded content via any app under certain circumstances (e.g. rental period expiry, copied to a different computer, etc)?
unauthorized users from using content received from authorized users via your app?
unauthorized users from using content received from authorized users via any app?
known users from accessing unpurchased/unauthorized content from the media library on your server via your app?
known users from accessing unpurchased/unauthorized content from the media library on your server via any app?
unknown users from accessing the media library on your server via your app?
unknown users from accessing the media library on your server via any app?
etc...
"Any app" in the above can include things like:
other player programs designed to interoperate/cooperate with your site (e.g. for flickr)
programs designed to convert content to other formats, possibly non-DRM formats
hostile programs designed to
From the article you linked, you can start to see some of the possible limitations of applying the DRM client-side...
The third, originally used in PyMusique, a Linux client for the iTunes Store, pretends to be iTunes. It requested songs from Apple's servers and then downloaded the purchased songs without locking them, as iTunes would.
The fourth, used in FairKeys, also pretends to be iTunes; it requests a user's keys from Apple's servers and then uses these keys to unlock existing purchased songs.
Neither of these approaches required breaking the DRM being applied, or even hacking any of the products involved; they could be done simply by passively observing the protocols involved, and then imitating them.
So the question becomes: are you trying to protect against these kinds of attack?
If yes, then client-applied DRM is not reasonable.
If no (for example, you're only concerned about people using your app, like Apple/iTunes does), then it might be.
(Repeat this process for every situation you can think of. If the answer is always either "client-applied DRM will protect me" or "I'm not trying to protect against this situation", then using client-applied DRM is reasonable.)
Note that for the last four of my examples, while DRM would protect against those situations as a side-effect, it's not the best place to enforce those restrictions. Those kinds of restrictions are best applied on the server in the login/authorization process.
If the server serves the content without protection, it's because the encryption is per-client.
That being said, Wireshark will foil your best-laid plans.
Encryption alone is usually just as good as sending a boolean telling you whether you're allowed to use the content, since the bypass is usually just changing the input/output of one encryption API call...
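To illustrate what "per-client encryption" can mean (this is not FairPlay's actual scheme), here is a toy sketch that derives a per-user key and locks content to it. The key-derivation step, the names, and the use of the third-party cryptography package are all assumptions for illustration.

```python
# Toy per-user content locking; real systems would use a proper KDF
# (e.g. HKDF) and a full key-management story.
import base64
import hashlib
from cryptography.fernet import Fernet  # pip install cryptography

def per_user_key(master_secret: bytes, user_id: str) -> bytes:
    # SHA-256 just keeps the sketch short; Fernet wants a 32-byte
    # urlsafe-base64-encoded key.
    digest = hashlib.sha256(master_secret + user_id.encode()).digest()
    return base64.urlsafe_b64encode(digest)

def lock_to_user(plain_media: bytes, master_secret: bytes, user_id: str) -> bytes:
    # Whether this runs on the client or the server, the file ends up
    # tied to one user's key.
    return Fernet(per_user_key(master_secret, user_id)).encrypt(plain_media)
```

Note how little this buys you: anyone who can obtain the derived key, or hook the decrypt call, gets the plaintext, which is exactly the "one API call" bypass described above.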
You want to use heavy binary obfuscation on the client side if you want the protection to literally hold for more than 5 minutes. Using decryption on the client side, make sure the data cannot be replayed and that the only way to bypass the system is to reverse engineer the entire binary protection scheme. Properly done, this will stop all the kids.
On another note, if this is a product to be run on an operating system, don't use processor specific or operating system specific anomalies such as the Windows PEB/TEB/syscalls and processor bugs, those will only make the program even less portable than DRM already is.
Oh and to answer the question title: No. It's a waste of time and money, and will make your product not work on my hardened Linux system.

What is going to overtake FTP and why aren't we using it yet?

Is it just me, or does FTP seem a little archaic? It seems slow and inefficient, and it's over 30 years old (not that all old things are bad :)
What protocols exist out there that might become successors to FTP?
I've used WebDAV a little, but don't know much about it. Is it faster? More reliable? More secure?
Why isn't there widespread adoption of a newer technology (yet)?
Update: Specifically, I'm referring to downloading/uploading files between developers and their web server.
I am aware of other mainstream protocols for other uses such as web browsing, file sharing, etc.
The nice thing about FTP is that it works, which is a major improvement over, for example, Windows filesharing (or for that matter, Win7's Homegroups).
There are plenty of other technologies for transferring files, though. HTTP is commonly used for retrieving files, and SCP or SFTP handle the secure aspect (both run over SSH; note that SFTP is its own protocol rather than FTP pushed through an SSH tunnel). As for inefficient? How so? Just because it's old doesn't mean it's inefficient.
How would a more efficient protocol work?
Anyway, FTP has its niche. It is used for transferring files where security is not important. It does the trick there, and I'm not aware of any superior alternatives, nor can I think of any obvious ways to improve the protocol.
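For the developer-to-web-server case the question asks about, the SFTP route looks something like this sketch using the third-party paramiko library; the host, credentials, and paths are placeholders:

```python
# Uploading a file to a web server over SFTP with paramiko
# (pip install paramiko).
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # demo only; verify host keys in practice
ssh.connect("example.com", username="dev", password="secret")
sftp = ssh.open_sftp()
sftp.put("site/index.html", "/var/www/index.html")  # local path, remote path
sftp.close()
ssh.close()
```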
This was a provocative article: Wish More Hosts Offered WebDAV? Blame PHP!
A useful thing about WebDAV is that it tends to be more firewall-friendly: you don't need to muck around with PASV. Since it can use HTTPS, you can obtain better security that way than you get with FTP.
Here's a discussion about the future of FTP and related file transfer protocols that I blogged about recently.
FTP used to be the One True system to move data around. That's pretty much fragmented now:
for public data distribution: HTTP, BitTorrent
for sharing data inside an organization: web-based tools, SMB and other native filesharing platforms
for moving data between boxes: scp, rsync
for sending data to an individual: email, web-based tools
Actually, I find FTP one of the most efficient protocols, as there is only minimal protocol overhead. Also, FTP commands are plain English words instead of binary commands.
Its main weakness is the lack of encryption, which puts it, IMHO, into the same category as Telnet, which has mostly been replaced by SSH.
There are replacements (e.g. SCP), but frankly, FTP is a fine protocol, and with FTP over SSH there is an alternative that addresses its main weakness. But yes, nowadays I would use SCP whenever possible.
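You can watch those plain-English commands go by with a few lines of Python's standard-library ftplib; the host and login are placeholders:

```python
# Echo the FTP command/reply exchange to see how simple the protocol is.
from ftplib import FTP

ftp = FTP("ftp.example.com")           # control connection on port 21
ftp.set_debuglevel(1)                  # print each command and reply: USER, PASS, PASV, LIST...
ftp.login("anonymous", "guest@example.com")
ftp.retrlines("LIST")                  # the listing arrives over a separate data connection
ftp.quit()
```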

Why don't browsers let you open a regular connection instead of Ajax or Comet?

If you want to open a two-way connection between the browser and server, the only choice is to poll (hammer the server) or use Comet (crufty and prone to disconnects).
Why don't browsers just let you open up a plain TCP connection? Is there any practical benefit to not having this ability?
The underlying protocol, HTTP, is basically a half-duplex communication protocol which is stateless as well and does not support full-duplex communication. However, with HTML5 WebSockets things are going to change. WebSockets is a new standard being considered as part of the HTML5 specification. Once the specification has been finalized and all the browser vendors have adopted the standard, you can possibly use WebSockets to establish a dedicated TCP connection through the browser itself.
We must also keep in mind that HTTP was basically designed to deliver documents and share information between geographically distributed teams; it was not intended to be a communication protocol as such.
Having said that, there are already companies which have built some messaging gateways to enable you to implement full duplex communication.
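For reference, here is a minimal sketch of the kind of full-duplex exchange WebSockets enables, using the third-party Python "websockets" package (pip install websockets). The port and echo behavior are assumptions, and older versions of the package pass the handler an extra path argument.

```python
# Minimal full-duplex echo server over WebSockets.
import asyncio
import websockets

async def echo(ws):
    async for message in ws:   # either side may send at any time
        await ws.send(message)

async def main():
    async with websockets.serve(echo, "localhost", 8765):
        await asyncio.Future()  # serve until cancelled

asyncio.run(main())
```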
Given that this functionality is effectively available through Flash, there's no real security rationale, but these days no browser wants to be the first to implement a non-standard extension like that. Moreover, there's no easy way to do threads, which could make using a socket rather awkward.
Over the years, many aspects of the web have been hijacked in order to deliver richer experiences. Comet is but one example, where long-lived connections are exploited in order to allow server-side push. Originally, web pages were just meant to be hyperlinked documents of text, not the rich applications we often see today. Hacks and abuses of the original intent will continue until one day these things become more standardised.
The answer to your question is essentially no, there is no tangible advantage to not being able to open a two-way connection between client and server in a browser. The reason it can't be done is simply that this was not the intention of web browsers, which were developed to poll/retrieve documents. With the advent of Rich Internet Applications, it has become desirable to have such functionality, but previously this had never been the goal of a browser. Currently there is a void to be filled by an eventual protocol or implementation of an existing protocol which will govern two-way communication between a browser and the server. There are existing techniques used to simulate this behavior to different degrees (AJAX, Comet, etc.) or it can be accomplished with embedded objects (Java, Flash, ActiveX Controls in IE) but these are simply paths around the void, not bridges over it.
We will simply have to wait (or act) for the standard to be written and the implementation to follow. More than likely, the implementation will actually come first, and we will have a fistful of new cross-browser compatibility issues to enjoy :) Oh, the bleeding edge!
Firewalls. Non-HTTP traffic is often blocked by firewalls, so opening up a random TCP port for communication will often fail.
