Client-to-server connection only sending, not receiving

This is my case: I have a server listening for connections, and a client that I'm writing now. The client has nothing to receive from the server; it only has to send status updates every 3 minutes.
I have the following at the moment:
    WSADATA ws;
    SOCKADDR_IN sa;
    WSAStartup(MAKEWORD(1,1), &ws);              // 0x101 == Winsock version 1.1
    sock = socket(AF_INET, SOCK_STREAM, 0);
    sa.sin_family = AF_INET;
    sa.sin_port = htons(PORT_NET);
    sa.sin_addr.s_addr = inet_addr("127.0.0.1");
    connect(sock, (SOCKADDR*)&sa, sizeof(sa));
    send(sock, (const char*)buffer, 128, 0);     // last argument is the flags int: 0, not NULL
What should my approach be? Can I avoid looping on recv()?

That rather depends on what behaviour you want and on your program's structure.
By default a socket will block on any read or write operation, which means that if you try to have your server's main thread poll the connection, it will 'freeze' for 3 minutes, or until the client closes the connection.
The absolute simplest functional solution (no multithreading) is to set the socket to non-blocking and poll it in the main thread. It sounds like you want to avoid doing that, though.
The most obvious way around that is to make a dedicated thread for every connection, plus one for the main listener socket. Your server listens for incoming connections and spawns a thread for each stream socket it creates. Each connection thread then blocks on its socket until it receives data, and either handles the data itself or shunts it onto a shared queue.
That's a bulky and complex solution: multiple threads which need opening and closing, and shared resources which need protecting.
Another option is to set the socket to non-blocking (under Win32, use ioctlsocket() with FIONBIO, or set a receive timeout with setsockopt(); under *nix, set the O_NONBLOCK flag with fcntl()). That way it returns control if there's no data available to read. However, that means you need to poll the socket at reasonable intervals ("reasonable" being entirely up to you and how quickly you need the server to act on new data).
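For reference, a rough sketch of those Win32 options (a non-blocking socket via ioctlsocket(), or a receive timeout via setsockopt()), with the *nix equivalent in a comment; sock is assumed to be an already-created socket:

    u_long nonBlocking = 1;
    ioctlsocket(sock, FIONBIO, &nonBlocking);   // non-blocking: recv() returns at once

    DWORD timeoutMs = 5000;                     // alternatively: a 5-second receive timeout
    setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO,
               (const char*)&timeoutMs, sizeof(timeoutMs));

    // *nix equivalent of FIONBIO:
    // fcntl(sock, F_SETFL, fcntl(sock, F_GETFL, 0) | O_NONBLOCK);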
Personally, for the lightweight use you're describing, I'd use a combination of the above: a single dedicated thread which polls a socket (or an array of non-blocking sockets) every few seconds, sleeping in between, and simply pushes the data onto a queue for the main thread to act upon during its main loop (sketched below).
There are a lot of ways to get into a mess with asynchronous programs, so it's probably best to keep it simple and get it working until you're comfortable with the control flow.
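To illustrate, a minimal sketch of that combined approach, assuming the sockets are already connected and set non-blocking (the queue layout, names, and buffer size here are illustrative only):

    #include <winsock2.h>
    #include <chrono>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    std::mutex g_inboxLock;
    std::queue<std::vector<char>> g_inbox;   // drained by the main thread's loop

    void pollerThread(std::vector<SOCKET> sockets)
    {
        char buf[512];
        for (;;) {
            for (SOCKET s : sockets) {
                int n = recv(s, buf, sizeof(buf), 0);  // non-blocking: returns immediately
                if (n > 0) {
                    std::lock_guard<std::mutex> lock(g_inboxLock);
                    g_inbox.push(std::vector<char>(buf, buf + n));
                }
                // n == SOCKET_ERROR with WSAEWOULDBLOCK simply means "no data yet"
            }
            std::this_thread::sleep_for(std::chrono::seconds(2));  // polling interval
        }
    }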

Related

Boost synchronous Client and Server - infinite loop blocking the rest

I'm using a synchronous server and a client that reads in an infinite loop:
    for (;;) {
        boost::system::error_code error;
        boost::asio::read(socket, boost::asio::buffer(&abc, sizeof(abc)), error);
        if (error)
            break;   // connection closed or failed
        ...
    }
What would be the best way to resolve the blocking of the rest of the program, given that I would like to use synchronous rather than asynchronous I/O? (A thread?)
Thanks in advance.
Make it a thread-per-connection design with a thread-safe queue serving as the main thread's inbox. It does not scale to many connections, but it is the safest and easiest thing to do when you do not need scalability. Writes are done by the main thread directly into the socket. Reads are done by the dedicated thread, which accumulates incoming data in a temporary buffer until it has received a whole message, then forwards the full message to the main thread's inbox queue.
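A sketch of what that reader thread could look like with boost::asio, assuming fixed-size, trivially-copyable messages so boost::asio::read() does the accumulating itself (Message and ThreadSafeQueue are placeholder types you would supply):

    #include <boost/asio.hpp>

    void readerThread(boost::asio::ip::tcp::socket& socket,
                      ThreadSafeQueue<Message>& inbox)
    {
        for (;;) {
            Message msg;
            boost::system::error_code error;
            // blocks until sizeof(msg) bytes have arrived, or the connection fails
            boost::asio::read(socket, boost::asio::buffer(&msg, sizeof(msg)), error);
            if (error)
                break;           // connection closed or broken
            inbox.push(msg);     // hand the complete message to the main thread
        }
    }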
If you need to serve many connections or are limited in resources, then you need to use non-blocking I/O with a select()/epoll()-based event loop (a.k.a. a reactor).

Listening to multiple sockets: select vs. multi-threading

A server needs to listen for incoming data on several sockets (10-20). After some initialization, those sockets are created and do not change (i.e. no new sockets are accepted, and none of them is expected to close during the lifetime of the server).
One option is to select() on all sockets, then deal with incoming data per socket (i.e. route it to the proper handling function).
Another option is to open one thread per socket and let each thread recv() and handle the input.
(The first option has the benefit of allowing a timeout, but this is not an issue in this case, since all the sockets are quite active.)
Assuming a Windows server with enough memory that 20 MB (for the 20 threads) is a non-issue, is either of those options expected to be faster than the other?
There's not much in it for your app. Typically, using a thread per socket is easier than asynchronous approaches because the overall structure is simpler and it's easier to maintain per-connection state.
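A minimal sketch of the thread-per-socket structure (handleInput() and the sockets container are placeholders for your own code):

    #include <winsock2.h>
    #include <thread>
    #include <vector>

    void handleInput(SOCKET s, const char* data, int len);  // your routing function

    void socketWorker(SOCKET s)
    {
        char buf[4096];
        for (;;) {
            int n = recv(s, buf, sizeof(buf), 0);  // blocks until data arrives
            if (n <= 0)
                break;                             // connection closed or error
            handleInput(s, buf, n);                // route to the handling function
        }
    }

    // during initialization, once the 10-20 sockets exist:
    void startWorkers(const std::vector<SOCKET>& sockets)
    {
        static std::vector<std::thread> workers;
        for (SOCKET s : sockets)
            workers.emplace_back(socketWorker, s);
    }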

Multiple socket client connections to a server

I am designing a simulator application that launches multiple socket connections (around 1000) to a server. I don't want to launch as many threads to handle those connections, since the system can't handle that many clients. Using select() doesn't seem to make sense either, since I would need to loop through 1000 connections, which may be slow. Please suggest how to handle this scenario.
You want to be using asynchronous I/O with an I/O Completion Port (IOCP).
It's too much to explain in a short space, but any Windows application that needs to support a large number of concurrent sockets should be using an IOCP.
An IOCP is essentially a Windows-provided thread-safe work queue. You queue a 'completion packet' to an IOCP, and another thread dequeues it and does work with it.
You can also associate handles that support overlapped operations, such as sockets, with an IOCP. When you associate a handle with an IOCP, overlapped operations such as WSARecv() will automatically post a completion packet to the associated IOCP when they complete.
So, essentially, you could have one thread handling all 1000 connections. Each socket is created as an overlapped socket and then associated with your IOCP. You can then call WSARecv() on all 1000 sockets and wait for a completion packet to become available. When data is received, the operating system posts a completion packet to the associated IOCP. It contains the relevant information, such as how much data was read and the buffer containing the data.
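A condensed sketch of those mechanics (error handling omitted, and the sockets are assumed to be connected, overlapped sockets already):

    #include <winsock2.h>
    #include <windows.h>

    struct Conn {            // one per connection
        OVERLAPPED ov;
        SOCKET     s;
        char       buf[4096];
    };

    void postRecv(Conn* c)   // (re-)arm an overlapped receive on this socket
    {
        WSABUF wb = { sizeof(c->buf), c->buf };
        DWORD flags = 0;
        ZeroMemory(&c->ov, sizeof(c->ov));
        WSARecv(c->s, &wb, 1, NULL, &flags, &c->ov, NULL);
    }

    // Setup, once per connection: associate the socket with the port, using
    // the Conn* as the completion key, then post the first receive:
    //   HANDLE iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);
    //   CreateIoCompletionPort((HANDLE)conn->s, iocp, (ULONG_PTR)conn, 0);
    //   postRecv(conn);

    void pumpCompletions(HANDLE iocp)   // one thread services every socket
    {
        for (;;) {
            DWORD bytes = 0;
            ULONG_PTR key = 0;
            OVERLAPPED* ov = NULL;
            if (!GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE))
                continue;               // failed I/O; real code would clean up
            Conn* c = (Conn*)key;
            if (bytes == 0) {           // peer closed the connection
                closesocket(c->s);
                continue;
            }
            // ... act on c->buf[0 .. bytes) ...
            postRecv(c);                // queue the next receive
        }
    }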
Looping through 1000 handles is still significantly faster than sending 1000 packets, so I wouldn't worry about performance here. select() is still the way to go.

Is this a scalable named pipe server implementation?

Looking at this example of named pipes using overlapped I/O from MSDN, I am wondering how scalable it is. If you spin up a dozen clients and have them hit the server 10x/sec, with each batch of 10 calls issued back-to-back before sleeping for a full second, it seems that some of the client instances are eventually starved.
Server implementation:
http://msdn.microsoft.com/en-us/library/aa365603%28VS.85%29.aspx
Client implementation (assuming the call is made 10x/sec, and there are a dozen instances):
http://msdn.microsoft.com/en-us/library/aa365603%28VS.85%29.aspx
Given that the web page points out that:
The pipe server creates a fixed number of pipe instances.
and
Although the example shows simultaneous operations on different pipe instances, it avoids simultaneous operations on a single pipe instance by using the event object in the OVERLAPPED structure. Because the same event object is used for read, write, and connect operations for each instance, there is no way to know which operation's completion caused the event to be set to the signaled state for simultaneous operations using the same pipe instance
you can probably safely assume that it's not as scalable as it could be; it's an API usage example, after all, and demonstration of functionality is usually the most important design constraint for such code.
If you need 12 clients making 10 connections per second, then I'd personally have the server able to handle more than just 12 clients, to allow for the period when the server is preparing for a new client to connect. Personally, I'd switch to using sockets, but that's just me (and I'm skewed that way because I've done lots of high-performance sockets work and already have all the code).

How does a non-forking web server work?

Non-forking (a.k.a. single-threaded or select()-based) web servers like lighttpd and nginx are steadily gaining popularity.
While there is a multitude of documents explaining forking servers (at various levels of detail), documentation for non-forking servers is sparse.
I am looking for a bird's-eye view of how a non-forking web server works. (Pseudo-)code or a state machine diagram, stripped down to the bare minimum, would be great.
I am aware of the following resources and found them helpful:
- The World of SELECT()
- thttpd source code
- Lighttpd internal states
However, I am interested in the principles, not implementation details.
Specifically:
Why is this type of server sometimes called non-blocking, when select() essentially blocks?
Processing a request can take some time. What happens to new requests during this time, when there is no dedicated listener thread or process? Is the request processing somehow interrupted or time-sliced?
Edit:
As I understand it, while a request is being processed (e.g. a file is read or a CGI script runs), the server cannot accept new connections. Wouldn't this mean that such a server could miss a lot of new connections if a CGI script runs for, let's say, 2 seconds or so?
Basic pseudocode:
    setup
    while true
        select/poll/kqueue
        with each fd needing action do
            read/write fd
            if fd was read and a well-formed request is in the buffer
                service request
        other stuff
Though select() & friends block, socket I/O is not blocking. You're only blocked until you have something fun to do.
Processing an individual request normally involves reading a file descriptor from a file (static resource) or process (dynamic resource) and then writing to the socket. This can be done handily without keeping much state.
So "service request" above typically means opening a file, adding it to the list for select(), and noting that data read from there goes out to a certain socket. Substitute FastCGI for the file when appropriate.
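To make the pseudocode concrete, here is a skeletal select() loop over POSIX sockets; listen_fd is assumed to be bound and listening already, and serviceRequest() stands in for the actual parsing and response logic:

    #include <sys/select.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <set>

    void serviceRequest(int fd, const char* buf, ssize_t n);  // your handler

    void eventLoop(int listen_fd)
    {
        std::set<int> clients;
        for (;;) {
            fd_set readable;
            FD_ZERO(&readable);
            FD_SET(listen_fd, &readable);
            int maxfd = listen_fd;
            for (int fd : clients) {
                FD_SET(fd, &readable);
                if (fd > maxfd) maxfd = fd;
            }

            select(maxfd + 1, &readable, NULL, NULL, NULL);  // blocks here

            if (FD_ISSET(listen_fd, &readable))              // new connection ready
                clients.insert(accept(listen_fd, NULL, NULL));

            for (auto it = clients.begin(); it != clients.end(); ) {
                int fd = *it;
                if (FD_ISSET(fd, &readable)) {
                    char buf[4096];
                    ssize_t n = read(fd, buf, sizeof(buf));
                    if (n <= 0) {                            // closed or error
                        close(fd);
                        it = clients.erase(it);
                        continue;
                    }
                    serviceRequest(fd, buf, n);              // parse + respond
                }
                ++it;
            }
        }
    }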
EDIT:
Not sure about the others, but nginx has two kinds of processes: a master and its workers. The master does the listening and then feeds the accepted connections to a worker for processing.
select() plus non-blocking I/O essentially allows you to manage/respond to multiple connections as they come in from a single thread (multiplexing), versus having multiple threads/processes handle one socket each. The goal is to minimize the ratio of server footprint to number of connections.
It is efficient because it takes a high number of active socket connections to saturate this single thread (since we can do non-blocking I/O across multiple file descriptors).
The rationale is that it takes very little time to acknowledge that bytes are available, interpret them, and then decide on the appropriate bytes to put on the output stream. The actual I/O work is handled without blocking this server thread.
This type of server is always waiting for a connection, by blocking on select(). Once it gets one, it handles the connection, then revisits select() in an infinite loop. In the simplest case, this server thread does NOT block at any other time besides when it is setting up the I/O.
If a second connection comes in, it will be handled the next time the server gets to select(). At this point the first connection could still be receiving, and we can start sending to the second connection from the very same server thread. This is the goal.
Search for "multiplexing network sockets" for additional resources.
Or try Unix Network Programming by Stevens, Fenner, and Rudoff.
