Does a typical FTP server need two threads for the data connection and the control connection?

I'm trying to write a standard FTP server, and I wonder whether this scenario is correct:
1. On each client request, a thread manager creates a thread for the control connection.
2. When the control connection thread receives a PORT command, it establishes the data connection (active open).
Is this the usual solution? I ask because I need to build a standards-compliant FTP server.
I would be happy if you answered just 'yes' or 'no'.
Thank you in advance.

Yes, FTP uses two connections. Read the RFC (http://www.ietf.org/rfc/rfc959.txt); the Wikipedia article (http://en.wikipedia.org/wiki/File_Transfer_Protocol) is a bit friendlier, but the RFC is the bible.
As far as threads go, you will need a thread to listen for incoming connections, a thread to process the control connection, and a thread to process the data connection. Alternatively, you could do it all with one thread by using asynchronous I/O via select().
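For illustration, here is a minimal single-thread sketch in POSIX C (my own example, not the asker's code; it handles one control connection at a time, omits error handling, and only stubs the FTP parsing and the data connection) showing how select() can multiplex the listening socket and the control connection:

    /* Minimal sketch (POSIX C, assumed): one thread multiplexing the
       listening socket and a single control connection with select(). */
    #include <sys/types.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <unistd.h>

    void serve(int listen_fd) {
        int ctrl_fd = -1;              /* one control connection at a time */
        for (;;) {
            fd_set rfds;
            FD_ZERO(&rfds);
            FD_SET(listen_fd, &rfds);
            int maxfd = listen_fd;
            if (ctrl_fd >= 0) {
                FD_SET(ctrl_fd, &rfds);
                if (ctrl_fd > maxfd) maxfd = ctrl_fd;
            }
            if (select(maxfd + 1, &rfds, NULL, NULL, NULL) < 0)
                break;                 /* select failed; bail out */
            if (FD_ISSET(listen_fd, &rfds))
                ctrl_fd = accept(listen_fd, NULL, NULL);  /* toy example:
                                          replaces any existing connection */
            if (ctrl_fd >= 0 && FD_ISSET(ctrl_fd, &rfds)) {
                char buf[512];
                ssize_t n = recv(ctrl_fd, buf, sizeof buf, 0);
                if (n <= 0) { close(ctrl_fd); ctrl_fd = -1; continue; }
                /* parse the FTP command here; on PORT, connect() a data
                   socket back to the client (active open) and serve the
                   transfer on it */
            }
        }
    }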

Related

How to make HTTP/2 requests with a persistent connection? (Any language)

How do I connect to https://api.push.apple.com using HTTP/2 with a persistent connection?
A persistent connection is needed to avoid rapid connection and disconnection:
"APNs treats rapid connection and disconnection as a denial-of-service attack."
https://developer.apple.com/library/ios/documentation/NetworkingInternet/Conceptual/RemoteNotificationsPG/Chapters/APNsProviderAPI.html
Is writing a client in C using https://nghttp2.org the only solution?
(If this question should be asked on another StackExchange site, please do tell me.)
Non-persistent connections are a relic of the past. They were used in HTTP/1.0, but HTTP/1.1 already moved to a model where connections are persistent by default, and HTTP/2 (which is also multiplexed) continues that model of connections being persistent by default.
Regardless of the language you use to develop your application, any HTTP/2-compliant client will use persistent connections by default.
You only need to use the HTTP/2 client library in a way that you don't explicitly close the connection after every request you make.
Typically these libraries employ a connection pool that keeps the connections open, typically until an idle timeout fires.
When your application makes HTTP requests, the library will pick an open connection and send the request. When the response arrives the library will not close the connection but instead put it back into the pool for the next usage.
Just study how the library you want to use allows you to make multiple requests without closing the connection.
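As a concrete illustration, here is a sketch using libcurl (my choice for the example; it speaks HTTP/2 when built against nghttp2). The point is simply not to tear the handle down between requests, so the connection stays in libcurl's pool; the URL is a placeholder:

    #include <curl/curl.h>

    int main(void) {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");  /* placeholder */
        curl_easy_setopt(curl, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_2_0);
        for (int i = 0; i < 3; i++) {
            /* reusing the same handle keeps the connection open; libcurl
               sends all three requests over one HTTP/2 connection */
            curl_easy_perform(curl);
        }
        curl_easy_cleanup(curl);   /* only now is the connection released */
        curl_global_cleanup();
        return 0;
    }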
I ran into this problem too!
If the connection has been idle for a long time (about an hour), poll() catches no socket status change: it always returns 0, even after on_frame_send_callback has been invoked.
Can anyone figure out the problem?

Combining pub/sub with req/rep in zeromq

How can a client both subscribe and listen to replies with zeromq?
That is, on the client side I'd like to run a loop which only receives messages and selectively sends requests, and on the server side I'd like to publish most of the time, but to sometimes receive requests as well.
It looks like I'll have to have two different sockets - one for each mode of communication. Is it possible to avoid that and on the server side receive "request notifications" from the socket on a zeromq callback thread while pushing messages to the socket in my own thread?
I am awfully new to ZeroMQ, so I'm not sure whether what you want is considered best practice. However, a solution using multiple sockets is pretty simple with zmq_poll.
The basic idea would be to have both client and server:
open a socket for pub/sub
open a socket for req/rep
multiplex sends and receives between the two sockets using zmq_poll in an infinite loop
process req/rep and pub/sub events within the loop as they occur
Using zmq_poll in this manner with multiple sockets is nice because it avoids threads altogether. The 0MQ guide has a good example here. Note that in that example, they use a timeout of -1 in zmq_poll, which causes it to block until at least one event occurs on any of the multiplexed sockets, but it's pretty common to use a timeout of x milliseconds or something if your loop needs to do some other work as well.
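To make that concrete, here is a minimal sketch in C with libzmq (the endpoints and message handling are placeholders of my own) that multiplexes a SUB socket and a REP socket in one thread with zmq_poll:

    #include <zmq.h>
    #include <stdio.h>

    int main(void) {
        void *ctx = zmq_ctx_new();
        void *sub = zmq_socket(ctx, ZMQ_SUB);
        void *rep = zmq_socket(ctx, ZMQ_REP);
        zmq_connect(sub, "tcp://localhost:5556");   /* placeholder endpoints */
        zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "", 0);  /* subscribe to everything */
        zmq_bind(rep, "tcp://*:5557");

        zmq_pollitem_t items[] = {
            { sub, 0, ZMQ_POLLIN, 0 },
            { rep, 0, ZMQ_POLLIN, 0 },
        };
        for (;;) {
            zmq_poll(items, 2, -1);                 /* -1 blocks until an event */
            char buf[256];
            if (items[0].revents & ZMQ_POLLIN) {    /* a published message */
                int n = zmq_recv(sub, buf, sizeof buf - 1, 0);
                if (n > (int)sizeof buf - 1)        /* zmq_recv reports full
                                                       size even if truncated */
                    n = sizeof buf - 1;
                if (n >= 0) { buf[n] = '\0'; printf("sub: %s\n", buf); }
            }
            if (items[1].revents & ZMQ_POLLIN) {    /* an incoming request */
                zmq_recv(rep, buf, sizeof buf, 0);
                zmq_send(rep, "ack", 3, 0);         /* reply to the request */
            }
        }
    }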
You can use 2 threads to handle the different sockets. The challenge is that if you need to share data between threads, you need to synchronize it in a safe way.
The alternative is to use the ZeroMQ Poller to select the sockets that have new data on them. The process would then use a single loop in the way bjlaub explained.
This could be accomplished using a variation/subset of the Majordomo Protocol. Here's the idea:
Your server will be a router socket, and your clients will be dealer sockets. Upon connecting to the server, the client sends some kind of subscription or "hello" message (of your design). The server receives that packet, but (being a router socket) also receives the identity of that client. When the server needs to send something to that client (again, per your design), it sends it to that identity. The client can send and receive at will, since it is a dealer socket.
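A sketch of the server side of that idea, in C with libzmq (the endpoint and frame contents are placeholders I made up): a ROUTER socket sees the sender's identity as the first frame of every inbound message, and addresses a client by sending that identity frame first:

    #include <zmq.h>

    int main(void) {
        void *ctx = zmq_ctx_new();
        void *router = zmq_socket(ctx, ZMQ_ROUTER);
        zmq_bind(router, "tcp://*:5558");           /* placeholder endpoint */

        char id[256], body[256];
        /* the client's "hello": identity frame, then payload frame */
        int id_len = zmq_recv(router, id, sizeof id, 0);
        zmq_recv(router, body, sizeof body, 0);

        /* later, push a message to that specific client by prefixing
           the outgoing message with its identity */
        zmq_send(router, id, id_len, ZMQ_SNDMORE);
        zmq_send(router, "update", 6, 0);
        return 0;
    }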

Suggestions on keeping connections alive with FTP file listing via AJAX?

I have a multi-user Ruby on Rails web application that can interact with an FTP server via AJAX. The application allows the user to browse an FTP site. Javascript makes an AJAX call which communicates with a server-script that returns a list of files and directories within a given directory.
This works fine. However, each time a directory listing is requested, the server must re-establish a connection to the FTP server, which takes a lot of time. I'm looking for a way to leave the FTP connection open until some number of timeout seconds has elapsed.
I could probably do this using threads (though, I'm completely open to other ideas) or some fancy connection-pooling scheme (perhaps via a daemon that manages this).
What are some ways I could persist and regain reference to connections in my ruby source?
Someone suggested using a "Connection: Keep-Alive" header, but I don't see how that would help in this case.
Not a complete answer, but if you did have some sort of daemon or something managing the connection, you could use TCP keepalives to keep the control connection alive for an extended period of time.
FTP uses two connections. A control connection is established client-to-server, and data connections are established server-to-client for each request. So each directory listing or GET would prompt another data connection to be opened for the duration of the request.
You shouldn't worry about keeping lots of listening sockets open because the data connections are negotiated over the control connection just prior to being established. (Also the data connections could be made client-to-server instead of server-to-client by using passive mode if you want, but it's still a separate connection.)
Either way, I think the source of the sluggishness has more to do with closing and reopening the control connection (and authenticating) for each request. If you have some process that keeps the control connection open using TCP keepalives (the SO_KEEPALIVE socket option), I think you'll see a big improvement.
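For reference, enabling keepalives on the control socket boils down to one setsockopt call. This is a POSIX C sketch of my own; the 60-second idle value is an arbitrary example, and TCP_KEEPIDLE is Linux-specific:

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    int enable_keepalive(int fd) {
        int on = 1;
        if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on) < 0)
            return -1;                  /* keepalive could not be enabled */
    #ifdef TCP_KEEPIDLE
        int idle = 60;                  /* start probing after 60s idle */
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof idle);
    #endif
        return 0;
    }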

WinSock best accept() practices

Imagine you have a server which can handle only one client at a time. The server uses WSAAsyncSelect to be notified of new connections. In this case, what is the best way of handling FD_ACCEPT messages:
A > Accept the connection attempt right away but queue the client until its turn?
B > Do not accept the next connection attempt until we are done serving the currently connected client?
What do you guys think is the most efficient?
Here I describe the cons that I'm aware of for both options. Hopefully this helps you decide.
A)
Upon a new client connection, the client could send tons of data, making your receive buffer fill up, which causes unnecessary packets to be transmitted (see this). If you don't plan to receive any data from the client, shut down receiving on that socket; then, if the client sends any data after that, the connection is reset. Moreover, if your protocol has strict rules, disconnect the client.
If the connection stays idle for too long, the system might disconnect it. To solve this, use setsockopt to set SO_KEEPALIVE on each client socket (see the sketch after option B).
B)
If you don't accept the connection within a certain period (I guess the default is 60 seconds), it will time out. In a normal (or most common) situation this indicates the server is overloaded and thus unable to answer in time. However, if the client is also designed by you, make the socket non-blocking, try to connect, and then manage the timeout as you wish.
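To tie option A together, here is a WinSock sketch of my own (enqueue_client is a hypothetical queue function, and the FD_ACCEPT handling assumes the listening socket was registered with WSAAsyncSelect): accept immediately, stop receiving if the protocol never reads from clients, enable keepalive, and queue the socket:

    #include <winsock2.h>

    #define WM_SOCKET (WM_USER + 1)

    void enqueue_client(SOCKET s);   /* hypothetical: queue until its turn */

    /* window procedure for the window registered with
       WSAAsyncSelect(listenSock, hwnd, WM_SOCKET, FD_ACCEPT | ...) */
    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam) {
        if (msg == WM_SOCKET && WSAGETSELECTEVENT(lParam) == FD_ACCEPT) {
            SOCKET client = accept((SOCKET)wParam, NULL, NULL);
            if (client != INVALID_SOCKET) {
                BOOL on = TRUE;
                setsockopt(client, SOL_SOCKET, SO_KEEPALIVE,
                           (const char *)&on, sizeof on);  /* survive long idles */
                shutdown(client, SD_RECEIVE);  /* if we never read from clients */
                enqueue_client(client);
            }
            return 0;
        }
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }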
Ask yourself: what do you want the user experience to be at the other end? Do you want them to be stuck? Do you want them to time out? Do you want them to get a polite message?

Windows sockets

Does Windows close idle sockets? I mean, if I'm a client connecting to a service on a Windows server, is it possible that the Windows server closes the connection after a certain idle time?
If yes, how can I change this behaviour, or at least change the idle-time value?
Thanks in advance.
The behavior depends on the SO_KEEPALIVE socket option. Here's the MSDN page about it. After some further digging, it turns out you can adjust the keepalive semantics with WSAIoctl and the SIO_KEEPALIVE_VALS control code.
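For concreteness, here is a sketch of that WSAIoctl call (the 30-second and 5-second values are arbitrary examples of mine):

    #include <winsock2.h>
    #include <mstcpip.h>   /* struct tcp_keepalive, SIO_KEEPALIVE_VALS */

    int set_keepalive(SOCKET s) {
        struct tcp_keepalive ka;
        DWORD bytes = 0;
        ka.onoff = 1;                 /* enable keepalive on this socket */
        ka.keepalivetime = 30000;     /* first probe after 30s idle (ms) */
        ka.keepaliveinterval = 5000;  /* then probe every 5s (ms) */
        return WSAIoctl(s, SIO_KEEPALIVE_VALS, &ka, sizeof ka,
                        NULL, 0, &bytes, NULL, NULL);
    }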
I find that if you control both ends, it is usually better to implement keepalive messages as part of the protocol used between the client and server, instead of relying on SO_KEEPALIVE.
I would imagine that depends on the protocol, implementation and perhaps server config.
PuTTY has an option to prevent SSH sessions from timing out by periodically sending null packets to the server; I suspect this is simply a packet with no actual payload. Perhaps you could implement something like this?
On MSDN you will find that the default keep-alive timeout is two hours. Note that setsockopt with SO_SNDTIMEO/SO_RCVTIMEO only controls send/receive timeouts, not keepalive; to change the keep-alive timing itself, use the WSAIoctl/SIO_KEEPALIVE_VALS approach described above.
