The Boost documentation says that io_service may distribute work across threads in an arbitrary fashion. Does that mean that when I'm using a TCP socket I may receive data out of order, because my receive handler may be dispatched across threads in an arbitrary fashion?
When you schedule an async_read or a read using a Boost io_service, you act on a socket, either through socket->read(...) or read(socket, ...). If you look through the documentation, there are some variants that accept a criterion for finishing the read: a number of bytes, or a matching condition. Using this you could have a connection which gives you, say, 20 bytes of data, and you read 10 bytes of it into one thread and, while that thread is processing the data, the remaining bytes go to another thread. There are a few cases when you may want to do that, but usually you will want each thread to read in an entire packet.
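For instance, one of those variants, transfer_exactly, can be used so the handler only runs once an entire fixed-size packet has arrived (the 20-byte packet size here is just an assumed example):

// Do not invoke the handler until exactly 20 bytes (one whole packet in
// this example) have been read, rather than whatever happens to be available.
boost::asio::async_read(socket,
    boost::asio::buffer(packetBuffer, 20),
    boost::asio::transfer_exactly(20),
    handler);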
If you want to ensure that only one thread is handling your io from a socket at a time, you can wrap the callbacks in a strand. Here's a fairly generic example of what that would look like.
// The completion handler is wrapped in a strand, so it will never run
// concurrently with other handlers wrapped by the same strand, even when
// io_service::run() is called from several threads.
boost::asio::async_read(socket,
    boost::asio::buffer(*responseBuffer),
    boost::asio::transfer_all(),
    strand.wrap(boost::bind(&YourClass::handleRead,
        this, /* or use shared_from_this() */
        boost::asio::placeholders::error)));
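As a rough sketch of how that fits together (the thread count and the work guard are assumptions, not part of the original snippet), the io_service can be run from several threads while the strand keeps the wrapped handlers serialized:

#include <boost/asio.hpp>
#include <thread>
#include <vector>

int main() {
    boost::asio::io_service io;
    boost::asio::io_service::strand strand(io);   // serializes wrapped handlers

    // ... create the socket and start the async_read shown above,
    //     passing strand.wrap(...) as the completion handler ...

    boost::asio::io_service::work work(io);       // keep run() from returning early

    // Several threads service the same io_service; handlers wrapped by the
    // same strand still never execute concurrently, so reads stay ordered.
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i)
        pool.emplace_back([&io] { io.run(); });
    for (auto& t : pool)
        t.join();
}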
TL;DR: I want a channel that has two extra fields which tell the producer whether it is allowed to send to the channel and, if so, what value the consumer expects. Although I know how to do it with shared memory, I believe that this approach goes against Go's ideology of "Do not communicate by sharing memory; instead, share memory by communicating."
Context:
I wish to have a server S that runs (besides others) three goroutines:
Listener that just receives UDP packets and sends them to the demultiplexer.
Demultiplexer that takes network packets and, based on some data, sends each one into one of several channels.
Processing task which listens to one specific channel and processes data received on that channel.
To check whether some devices on the network are still alive, the processing task will periodically send out nonces over the network and then wait for k seconds. In those k seconds, other participants of my protocol that received the nonce will send a reply containing (besides other information) the nonce. The demultiplexer will receive the packets from the listener, parse them, and send them to the processing_channel. After the k seconds have elapsed, the processing task processes the messages pushed onto the processing_channel by the demultiplexer.
I want the demultiplexer to not just blindly send any response (of the correct type) it receives onto the processing_channel, but instead to check whether the processing task is currently even expecting any messages and, if so, which nonce value it expects. I made this design decision in order to drop unwanted packets as soon as possible.
My approach:
In other languages, I would have a class with the following fields (in pseudocode):
class ActivatedChannel {
    boolean flag_expecting_nonce;
    int expected_nonce;
    LinkedList chan;
}
The demultiplexer would then, upon receiving a packet of the correct type, simply acquire the lock for the ActivatedChannel processing_channel object, check whether the flag is set and the nonce matches, and if so add the message to the LinkedList chan!
Problem:
This approach makes use of locks and shared memory, which does not align with Golang's "Do not communicate by sharing memory; instead, share memory by communicating" mantra. Hence, I would like to know... :
... whether my approach is "bad" regarding Go in the sense that it relies on shared memory.
... how to achieve the outlined result in a more Go-like way.
Yes, the approach you describe doesn't align with Go's idiomatic way of doing things, and you have rightly pointed out that in it you are communicating by sharing memory.
To achieve this in an idiomatic Go way, one approach could be for your Demultiplexer to "remember" all the processing_channels that are expecting a nonce and the corresponding nonce each one expects. Whenever a processing_channel is ready to receive a reply, it sends a signal to the Demultiplexer saying that it is expecting a reply.
Since the Demultiplexer is at the center of all the communication, it can maintain a mapping between each processing_channel and the corresponding nonce it expects. It can also maintain a "registry" of all the processing_channels that are expecting a reply.
In this approach, we are sharing memory by communicating.
For communicating that a processing_channel is expecting a reply, the following struct can be used:
type ChannelState struct {
    ChannelId        string // unique identifier for the processing channel
    IsExpectingNonce bool
    ExpectedNonce    int
}
In this approach, there is no lock used.
We have a server that needs 1 UDP connection for each gameplay area, and these each run on their own thread.
We are using C++.
We are using non-blocking sockets with recvfrom. The first thing the "read" function checks is whether the recvfrom "in" buffer contains NULL after the call, and then whether the error is WSAEWOULDBLOCK.
If that error is found, the function returns and the thread is put to sleep for 1 ms (but really, it's longer).
If there is data, it is processed. Some paths lead to immediate processing, but in most cases the data is put into a queue for the game area's main thread to handle.
My question: Is there a more efficient and better-performing method than using thread.sleep(1) to ensure each gameplay area's UDP server instance does not spin while there is nothing to receive, yet can still respond to packets faster than the inherent and random thread wake-up of the scheduler allows?
In this last part of the requirement, I'm referring to the fact that a thread will usually never sleep for only 1 ms, but rather, on average, more like 50 ms.
The case may arise, later, when the server is being sent requests at a constant rate, that the loop to check and respond to packets is never empty, and so the thread.sleep(1) will never be reached. So I suppose this is more a best-practice type of question, but I would implement a better solution if one is available.
Thank you
Edit: added info. After adding this, perhaps this implementation isn't anything to worry about. I think the worst-case scenario is that a set of packets would have to wait the 45-55 ms for the thread to be scheduled, should they miss the opportunity to be read by the socket.
I suppose that to improve, I could make the recvfrom call its own thread, make the socket block, and use a condition variable to wake the thread responsible for processing the packets. What do you think about this idea? Too much overhead?
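For what it's worth, a minimal sketch of that idea (Winsock specifics like WSAStartup, socket creation and binding are assumed to happen elsewhere; the queue and names are illustrative): one thread blocks in recvfrom and a condition variable wakes the processing thread as soon as a packet is queued, instead of sleeping for a fixed 1 ms.

#include <winsock2.h>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct Packet { std::vector<char> data; };

std::queue<Packet> g_queue;
std::mutex g_mutex;
std::condition_variable g_cv;

// Dedicated receive thread: the socket is left blocking, so this thread
// sleeps inside recvfrom() until data actually arrives.
void receiveLoop(SOCKET sock) {
    char buf[2048];
    for (;;) {
        sockaddr_in from{};
        int fromLen = sizeof(from);
        int n = recvfrom(sock, buf, sizeof(buf), 0,
                         reinterpret_cast<sockaddr*>(&from), &fromLen);
        if (n <= 0) continue;                     // error handling omitted
        {
            std::lock_guard<std::mutex> lock(g_mutex);
            g_queue.push(Packet{std::vector<char>(buf, buf + n)});
        }
        g_cv.notify_one();                        // wake the processing thread
    }
}

// Processing thread: waits on the condition variable instead of polling,
// so it wakes as soon as a packet is available rather than after ~50 ms.
void processLoop() {
    for (;;) {
        std::unique_lock<std::mutex> lock(g_mutex);
        g_cv.wait(lock, [] { return !g_queue.empty(); });
        Packet p = std::move(g_queue.front());
        g_queue.pop();
        lock.unlock();
        // ... handle p.data for this gameplay area ...
    }
}

The overhead is one blocking thread per socket plus a mutex/condition-variable handoff per packet, which is usually small compared with waiting out a fixed sleep.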
A server needs to listen to incoming data from several sockets (10-20). After some initializations, those sockets are created and do not change (i.e. no new sockets accepted, and none of them is expected to close during the lifetime of the server).
One option is to select() on all sockets, then deal with incoming data per socket (i.e. route to proper handling function).
Another option is to open one thread per socket and let each thread recv() and handle the input.
(The first option has the benefit of setting a timeout, but this is not an issue in this case, since all the sockets are quite active.)
Assuming the following: a Windows server, with enough memory such that 20 MB (for the 20 threads) is a non-issue, is either of those options expected to be faster than the other?
There's not much in it in your app. Typically, a thread-per-socket approach is easier than asynchronous approaches because the overall structure is simpler and it's easier to maintain state.
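To illustrate the simpler structure, a sketch of the thread-per-socket variant (assumes the 10-20 sockets are already connected; POSIX calls are used for brevity, the Winsock equivalent being recv on a SOCKET):

#include <sys/socket.h>
#include <unistd.h>
#include <thread>
#include <vector>

// One blocking reader per socket: each thread just sits in recv() and
// routes whatever it receives to the handling function for that socket.
void handleSocket(int sock) {
    char buf[4096];
    for (;;) {
        ssize_t n = recv(sock, buf, sizeof(buf), 0);  // blocks until data arrives
        if (n <= 0)
            break;                                    // peer closed or error
        // ... pass buf[0..n) to the proper handling function ...
    }
    close(sock);
}

void serveAll(const std::vector<int>& sockets) {
    std::vector<std::thread> threads;
    for (int s : sockets)
        threads.emplace_back(handleSocket, s);        // ~1 MB of stack each
    for (auto& t : threads)
        t.join();
}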
I am designing a simulator application that launches multiple socket connections (around 1000) to a server. I don't want to launch as many threads to handle those connections, since the system can't handle that many clients. Using select doesn't make sense either, since I would need to loop through 1000 connections, which may be slow. Please suggest how I should handle this scenario.
You want to be using asynchronous I/O with an I/O Completion Port (IOCP).
It's too much to explain shortly, but any Windows application that needs to support a large number of concurrent sockets should be using an IOCP.
An IOCP is essentially a Windows-provided thread-safe work queue. You queue a 'completion packet' to an IOCP and then another thread dequeues it and does work with it.
You can also associate many types of handles that support overlapped operations, such as sockets, with an IOCP. When you associate a handle with an IOCP, overlapped operations such as WSARecv will automatically post a completion packet to the associated IOCP.
So, essentially, you could have one thread handling all 1000 connections. Each socket will be created as an overlapped socket and then associated with your IOCP. You can then call WSARecv on all 1000 sockets and wait for a completion packet to become available. When data is received, the operating system will post a completion packet to the associated IOCP. This will contain relevant information, such as how much data was read and the buffer containing the data.
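To make that concrete, here is a heavily condensed sketch of the pattern (error handling, WSAStartup, connection setup and the re-posting of reads after each completion are omitted; all names are illustrative; link with Ws2_32.lib):

#include <winsock2.h>
#include <windows.h>
#include <vector>

// One outstanding overlapped read per socket.
struct PerSocket {
    SOCKET        sock;
    WSAOVERLAPPED ov;
    char          buf[4096];
};

void runIocpLoop(const std::vector<SOCKET>& sockets) {
    // A single completion port serves all 1000 sockets.
    HANDLE iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);

    for (SOCKET s : sockets) {
        PerSocket* ps = new PerSocket{ s, {}, {} };
        // Associate the overlapped socket with the IOCP; the PerSocket
        // pointer is used as the completion key.
        CreateIoCompletionPort(reinterpret_cast<HANDLE>(s), iocp,
                               reinterpret_cast<ULONG_PTR>(ps), 0);

        // Post an overlapped receive; its completion packet will be queued
        // to the IOCP when data arrives.
        WSABUF wb;
        wb.len = sizeof(ps->buf);
        wb.buf = ps->buf;
        DWORD flags = 0;
        WSARecv(s, &wb, 1, NULL, &flags, &ps->ov, NULL);
    }

    for (;;) {
        DWORD bytes = 0;
        ULONG_PTR key = 0;
        LPOVERLAPPED pov = NULL;
        // Blocks until any of the sockets completes a read.
        if (!GetQueuedCompletionStatus(iocp, &bytes, &key, &pov, INFINITE))
            continue;
        PerSocket* ps = reinterpret_cast<PerSocket*>(key);
        // ... process ps->buf[0..bytes), then post the next WSARecv ...
    }
}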
Looping through 1000 handles is still significantly faster than sending 1000 packets, so I wouldn't worry about performance here. select() is still the way to go.
Non-forking (aka single-threaded or select()-based) webservers like lighttpd or nginx are gaining in popularity more and more.
While there is a multitude of documents explaining forking servers (at various levels of detail), documentation for non-forking servers is sparse.
I am looking for a bird's-eye view of how a non-forking web server works. (Pseudo-)code or a state machine diagram, stripped down to the bare minimum, would be great.
I am aware of the following resources and found them helpful:
The World of SELECT()
thttpd source code
Lighttpd internal states
However, I am interested in the principles, not implementation details.
Specifically:
Why is this type of server sometimes called non-blocking, when select() essentially blocks?
Processing of a request can take some time. What happens with new requests during this time when there is no specific listener thread or process? Is the request processing somehow interrupted or time sliced?
Edit:
As I understand it, while a request is processed (e.g. a file is read or a CGI script is run), the server cannot accept new connections. Wouldn't this mean that such a server could miss a lot of new connections if a CGI script runs for, let's say, 2 seconds or so?
Basic pseudocode:
setup
while true
    select/poll/kqueue
    with fd needing action do
        read/write fd
        if fd was read and well-formed request in buffer
            service request
        other stuff
Though select() & friends block, socket I/O is not blocking. You're only blocked until you have something fun to do.
Processing individual requests normally involves reading a file descriptor from a file (static resource) or process (dynamic resource) and then writing to the socket. This can be done handily without keeping much state.
So service request above typically means opening a file, adding it to the list for select, and noting that stuff read from there goes out to a certain socket. Substitute FastCGI for file when appropriate.
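A stripped-down sketch of the same loop with select() and POSIX sockets (no error handling; parsing and serving the request is reduced to a placeholder, and listen_fd is assumed to be a bound, listening socket):

#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>
#include <algorithm>
#include <vector>

void runEventLoop(int listen_fd) {
    std::vector<int> clients;
    for (;;) {
        // Build the read set: the listening socket plus every client.
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(listen_fd, &readfds);
        int maxfd = listen_fd;
        for (int fd : clients) {
            FD_SET(fd, &readfds);
            maxfd = std::max(maxfd, fd);
        }

        // Block here until at least one fd is readable ("select/poll/kqueue").
        select(maxfd + 1, &readfds, NULL, NULL, NULL);

        // New connection ready to accept?
        if (FD_ISSET(listen_fd, &readfds))
            clients.push_back(accept(listen_fd, NULL, NULL));

        // Data on an existing connection? ("read fd ... service request")
        for (std::size_t i = 0; i < clients.size(); ++i) {
            int fd = clients[i];
            if (!FD_ISSET(fd, &readfds))
                continue;
            char buf[4096];
            ssize_t n = read(fd, buf, sizeof(buf));
            if (n <= 0) {                       // peer closed or error
                close(fd);
                clients[i] = -1;                // lazily mark for removal
                continue;
            }
            // ... append buf[0..n) to this connection's buffer; if a whole
            //     request is now buffered, service it (file, FastCGI, ...) ...
        }
        clients.erase(std::remove(clients.begin(), clients.end(), -1),
                      clients.end());
    }
}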
EDIT:
Not sure about the others, but nginx has 2 processes: a master and a worker. The master does the listening and then feeds the accepted connection to the worker for processing.
select() plus non-blocking I/O essentially allows you to manage/respond to multiple connections as they come in, on a single thread (multiplexing), versus having multiple threads/processes handle one socket each. The goal is to minimize the ratio of server footprint to number of connections.
It is efficient because this single thread can handle a high level of active socket connections before reaching saturation (since we can do non-blocking I/O on multiple file descriptors).
The rationale is that it takes very little time to acknowledge bytes are available, interpret them, then decide on the appropriate bytes to put on the output stream. The actual I/O work is handled without blocking this server thread.
This type of server is always waiting for a connection, by blocking on select(). Once it gets one, it handles the connection, then revisits the select() in an infinite loop. In the simplest case, this server thread does NOT block any other time besides when it is setting up the I/O.
If there is a second connection that comes in, it will be handled the next time the server gets to select(). At this point, the first connection could still be receiving, and we can start sending to the second connection, from the very same server thread. This is the goal.
Search for "multiplexing network sockets" for additional resources.
Or try Unix Network Programming by Stevens, Fenner, and Rudoff.