Why doesn't SFTP have an active/passive mode like FTP - ftp

As I understand it, active and passive mode in FTP changes the port on which commands and data are sent from the client to the server which can be useful where firewalls are concerned. I think I'm also right in saying that SFTP doesn't have the same concept - but I'm not clear what nuances of the SFTP protocol make it unnecessary/undesirable to mimic that same pattern that exists in FTP.

The active/passive mode distinction in the FTP protocol is needed because in FTP there's a separate transfer channel/connection for file transfers. And in different network setups, a different mode might be needed (though nowadays, mostly passive mode is used).
It's not useful where firewalls are concerned; it's a problem where firewalls are concerned. This concept of a separate connection on a separate port was probably not a good idea, as I do not think that this model was ever repeated again in any other similar protocol. The Wikipedia FTP article mentions that FTP was designed this way because originally it was not intended to operate over TCP/IP (FTP originated in 1971).
In SFTP, there's nothing like that. Everything happens within one connection. So there are no problems "where firewalls are concerned".

Related

What is the difference between WebSocket and an SSH tunnel?

We are replacing an SSH tunnel with WebSocket. May I know how WebSocket is better than an SSH tunnel?
WebSocket is a protocol designed for two-way real-time communication between browsers and servers, to replace hacky solutions like long polling and XHR streaming.
SSH is a protocol designed for operating network services securely over an insecure network. Usually it's used for remote logins and file transfers; however, it can be used for any protocol, though a few modifications need to be made.
The difference between them is, well, WebSocket is designed to be used from the browser and has support there. SSH, however, is a more general protocol and can be used for more, but it is not supported by browsers directly, only through proxies which bridge WebSocket to SSH.
There is nothing inherently "better" about WebSocket compared to SSH. It just depends on your use case: if you want to make a remote terminal or something for your sysadmins, use SSH. If you want to use it for, say, a real-time chat in the browser, use WebSocket.

p2p direct data exchange...via HTTPS?

Is there any protocol, API or software in existence that can send data/IM/etc directly from one device to another with no server?
Can you not use HTTP GET/POST/DELETE directly between two devices when their device data is known to the user(s)?
I would very much like to know if there is ANY software/protocols that can do this.
thank you!
The internet is built on the Internet Protocol suite. This suite has 5 different layers of protocols: the physical layer, the link layer, the network layer, the transport layer and the application layer. Each depends on the one before.
If you just use the browser, by default HTTP (application layer) is used, which relies on TCP (transport layer), which relies on IP (v4 or v6, network layer), which relies on Ethernet (link layer), which finally relies on the actual cable that's plugged into your computer (for WiFi, the first three are the same but the last two differ, if I'm not mistaken).
Now to your question: Is there any protocol, API or software in existence that can send data/IM/etc directly from one device to another with no server?
Yes there is. I suggest you start looking at protocols that are in the application layer. To highlight a few standards next to HTTP(S): FTP is for file transfer, IMAP is for email clients, SMTP is for email servers and SSH is a secure shell which can also be used to tunnel data through.
For your specific case, I think either FTP (FTPS if you want it over SSL), or SSH can be a solution, but it's hard to know for sure without the specifics.
The only thing that these protocols have in common is that one of the two computers will act as the server and the other computer as the client. This has the downside that port forwarding might be necessary.
If you've chosen the protocol you'd like to use, then you're up for the next step, selecting a program that can do that for you. For HTTP(S), I'd recommend Apache. If you're using Linux and chose SSH, then you're in luck: there is a built-in SSH server in Linux, you can use that. For other protocols, you might just want to search yourself, as I do not have any suggestions.
I hope this answers most of your questions!
Desirius
In browser context, WebRTC is probably what you are looking for: It allows user to user communications.
https://en.wikipedia.org/wiki/WebRTC
https://webrtc.org/

My Windows FTP server cannot be accessed remotely on some networks

I have set up a Windows 2003 FTP server and use Chilkat to connect to this FTP server inside my customized application. My application is developed in VB6 with FTP support from Chilkat. The application works in different places in the city and connects to my FTP server. I am unable to access the FTP server and transfer files using the customised application from some networks, like Idea Netsetter / BSNL. It works perfectly on other networks.
Thanks in advance.
Regards,
Sam
This is likely to be a firewall issue at the client end. FTP is often blocked by firewalls.
Just as well, FTP has its problems making it a less than ideal alternative. There are better options such as SFTP or FTPS but support for those is limited in Windows and you'll have to buy both server and client pieces to use one of them.
Fewer firewalls block HTTP and HTTPS, though some are finicky enough to block traffic that doesn't look like Web browsing. Still, your odds of success go up substantially.
An obvious choice might be to use WebDAV. IIS supports WebDAV and it is pretty easy to write simple WebDAV client logic in VB6 based on one of the many HTTP components available. I'd probably use XmlHttpRequest or WinHttpRequest for that. A search ought to turn up several VB6 classes written to wrap one of them to support WebDAV client operations. You can also buy WebDAV client libraries.
Stick to using HTTPS (which means you need a server certificate for IIS) and you won't have passwords going over the network in the clear. Even if you use HTTP you'll be no worse off than using FTP, plus it'll work through the vast majority of firewalls except those that specifically block non-browsing HTTP requests.
This could be a firewall configuration on the client or server. You're not going to be able to do much about the client, but for the server it may depend on whether you're doing Active or Passive FTP connections.
If you are doing Active connections, make sure ports 20 and 21 are open.
If you're doing Passive connections, you may want to check out this article about configuring the PassivePortRange in Server 2003 FTP- http://support.microsoft.com/?id=555022.

FTP over Satellite/High Latency connections

I use FTP on a daily basis to work on multiple websites, but when I try to work from home, my darned satellite internet has a latency of about 1000ms. (It's craptastic service, I know, but there are no alternatives where I live.) Thus, I was wondering if there is a way that I can connect to my web server and transfer files that can accommodate this latency.
FTP "works", but it communicates very very slowly, and it's a nightmare with multiple files. It takes the connection about 10-15 seconds to start the transfer, and another 5 seconds after the transfer is done. The transfer itself goes very fast as expected, but the handshake process does not, as the server/client seem to need to do a lot of communication to negotiate the transfer. Worse, it seems to need to do this handshake thing for every individual file, which certainly doesn't help.
Is there any way I can modify my FTP to make it work better over a high latency connection? If not, are there any other protocols or transfer services I might be able to use that could handle such an issue? It's the main fault I find with my ISP, and there's not a lot I've been able to find that I can do about it...
Thanks
Sounds like a good case for using UDP rather than TCP-based protocols - e.g. uftp
A quote from the linked site: "especially useful for data distribution over a satellite link (with two way communication), where the inherent delay makes any TCP based communication terribly inefficient".
A few options:
Sneaker-net. Use a USB key.
SCP. I'm almost positive it'll only authenticate/handshake once.
Tunnelling over SSH. The poor man's VPN. You'll be able to tunnel FTP or anything you like over the SSH connection. It'll be as fast as you're going to get and is very secure to boot.

What is best practice for large file transfer - SFTP or asymmetric file encryption?

Which is generally considered "best practice" when wanting to securely transmit flat files over the wire? Asymmetric encryption seems to be a pain in that you have to manage keysets at endpoints and make sure that the same algorithm is used by all clients, whereas SFTP seems to be a pain because of NAT issues with encrypting the control channel, thus the router cannot translate the IP. Is there a third-party solution that is highly recommended?
I believe you're talking about FTP with SSL when you say SFTP, and not the SFTP protocol that goes along with SSH. Use SFTP (the SSH version) as it doesn't require an encrypted control channel and will work fine over NAT. The SFTP page I linked to lists a number of graphical SFTP clients at the bottom of the page.
rsync is the best file transferring utility out there. Supports resume, recursion and a variety of encryption including ssh (the default). Like scp on steroids.
If you have multiple routers to punch through you can build ssh tunnels. It will only transfer parts of the file that are missing which make it great for backups. It has so many useful features I use it instead of cp for local copying.
It's available for many platforms and included by default on modern *nix systems. More at http://samba.anu.edu.au/rsync/
Use PGP / GPG and transfer the gpg-ed file directly via ftp or any other method.
Yah, I meant SSL FTP, not SFTP. "Management" is adverse to open-source, but if that's what the de-facto best practice is, then that's what to use...thanks for answers
With FTPS, you can generally switch to an unencrypted control channel via the CCC command after authentication. This approach means no problems with routers, while the data you are transferring will remain encrypted.
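As an added illustration (not code from any of the posts above) of two points raised on this page: the first question's observation that an FTP transfer is a second TCP connection whose endpoint is negotiated inside the control channel, and this thread's NAT problem, where a router can only rewrite that endpoint if it can read the control channel. The Python below parses the PASV reply, builds the active-mode PORT command, and applies one client-side safeguard (an assumption here, not something the thread states): ignore an advertised address that differs from the control connection's peer.

```python
import re
import socket

# Six comma-separated fields of a "227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)" reply.
PASV_RE = re.compile(r"\((\d{1,3}),(\d{1,3}),(\d{1,3}),(\d{1,3}),(\d{1,3}),(\d{1,3})\)")


def parse_pasv_reply(reply: str) -> tuple:
    """Return the (ip, port) the server advertises in a PASV reply.

    The client opens a second TCP connection to this endpoint for the data
    transfer.  Because the address travels inside the control channel, a NAT
    device rewrites it there, and cannot once the control channel is encrypted.
    """
    if not reply.startswith("227"):
        raise ValueError(f"not a PASV reply: {reply!r}")
    m = PASV_RE.search(reply)
    if m is None:
        raise ValueError(f"malformed PASV reply: {reply!r}")
    h1, h2, h3, h4, p1, p2 = (int(x) for x in m.groups())
    if max(h1, h2, h3, h4, p1, p2) > 255:
        raise ValueError("PASV field out of range")
    return f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2


def passive_data_endpoint(reply: str, control_peer_ip: str) -> tuple:
    """Pick the endpoint for the data connection.

    Assumed safeguard: if the advertised address differs from the peer of the
    control connection (e.g. a server behind NAT advertising its private
    address), fall back to the control peer's address and keep only the port.
    This also prevents the data connection being steered to a third host.
    """
    ip, port = parse_pasv_reply(reply)
    if ip != control_peer_ip:
        ip = control_peer_ip
    return ip, port


def build_port_command(ip: str, port: int) -> str:
    """Active mode: the PORT command telling the server where to connect back.

    That inbound connection to the client is what client-side firewalls block.
    """
    socket.inet_aton(ip)  # raises OSError on a malformed dotted quad
    if not 0 < port <= 65535:
        raise ValueError("port out of range")
    return "PORT " + ",".join(ip.split(".") + [str(port // 256), str(port % 256)])
```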
