In this article about a Go web server, there are a Listen Socket and a Client Socket in Go.
I can't understand why Go needs two sockets, a Listen Socket and a Client Socket, rather than just one. Can anyone explain the concept or give a metaphor?
EDIT: I updated my answer.
Maybe I misunderstood the graph, or the graph isn't drawn very well. Possibly the Listen Socket and the Client Socket are the same socket: while the socket hasn't accepted a connection from a client it's called the Listen Socket, and after it accepts the connection it's renamed the Client Socket. In that case there would be only one socket, with different stages and names.
UPDATE 1:
I found a better article and graph about how sockets work Here.
In the article's graph, it's clear that when there's a new connection, the TCP server creates a new socket to handle that connection, and the Listen Socket continues listening for other connections.
Here's a paragraph in the article:
The first socket created by a TCP server, via NetSock_Open(), is typically designated a listen socket, and, after the call to NetSock_Listen(), remains open indefinitely, to allow the server to respond to various connection requests. Rather than using this socket to exchange data with requesting clients, the server will create a new socket for each request.
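Here is a minimal Go sketch of that idea (my own illustration, not from the article): net.Listen creates the listen socket, and every Accept call returns a brand-new connection, the "client socket", while the listener keeps listening.

package main

import (
	"log"
	"net"
)

func main() {
	// The listen socket: it only accepts connections, it never carries data.
	ln, err := net.Listen("tcp", ":8080")
	if err != nil {
		log.Fatal(err)
	}
	for {
		// Accept returns a second, brand-new socket for this client.
		conn, err := ln.Accept()
		if err != nil {
			log.Println("accept:", err)
			continue
		}
		// ln keeps listening; conn is used only to talk to this client.
		go func(c net.Conn) {
			defer c.Close()
			c.Write([]byte("hello\n"))
		}(conn)
	}
}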
UPDATE 2
Since the first update is specific to Micrium, I found another, seemingly more general, description of how TCP works Here:
TCP connection flow
The following sequence shows the flow of a TCP connection:
1. The server creates the listener socket that is waiting for remote clients to connect.
2. The client issues the connect() socket function to start the TCP handshake (SYN, SYN/ACK, ACK). The server issues the accept() socket function to accept the connection request.
3. The client and server issue the read() and write() socket functions to exchange data over the socket.
Note: There are several SSL APIs that you can use to send and receive data other than the read() and write() socket functions.
4. Either the server or the client decides to close the socket. This causes the TCP closure sequence (FINs and ACKs) to occur.
5. The server either closes the listener socket or repeats beginning with step 2 to accept another connection from a remote client.
Note: Normally after the accept() socket function ends, the server divides into two processes (or threads). The first process handles the connection with the client and the second process issues the next accept() socket function.
(Figure 1 in the original article shows an example of a TCP connection.)
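To round out the flow, here is the client half sketched in Go (my own illustration; the steps above map onto net.Dial, Read/Write, and Close):

package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	// Step 2: connect() starts the TCP handshake (SYN, SYN/ACK, ACK).
	conn, err := net.Dial("tcp", "localhost:8080")
	if err != nil {
		log.Fatal(err)
	}
	// Step 4: closing the socket triggers the FIN/ACK closure sequence.
	defer conn.Close()

	// Step 3: exchange data over the socket.
	if _, err := conn.Write([]byte("ping\n")); err != nil {
		log.Fatal(err)
	}
	buf := make([]byte, 1024)
	n, err := conn.Read(buf)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("got %q\n", buf[:n])
}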
Note:
I found another Socket Programming Tutorial that mentions the working details of TCP.
And in the .NET Framework MSDN, the explanation of the Socket.Accept method says that Accept synchronously extracts the first pending connection request from the connection request queue of the listening socket, and then creates and returns a new Socket.
I skimmed the TCP RFC before Update 1, but I didn't see it mention the detail that listen uses one socket and that accept creates another, new socket.
Maybe the thorough way is to research the Go source code for socket creation and connection, but I'm not considering doing that now.
Related
I'm building a distributed system, and I would like asynchronous send and recv from both sides, with blocking after the high-water mark.
PUSH/PULL sockets work great, but I wasn't able to bind a PUSH socket. This means I can't have a client-PUSH to server-PULL and a server-PUSH to client-PULL if the client is behind a firewall, since the server can't connect to the client.
In the book, the following is written, but I can't find an example of it.
"REQ to DEALER: you could in theory do this, but it would break if you added a second REQ because DEALER has no way of sending a reply to the original peer. Thus the REQ socket would get confused, and/or return messages meant for another client." http://zguide.zeromq.org/php:chapter3
I only need a one-to-one connection, so this would in theory work for me.
My question is, what is the best practice to obtain asynchronous send and recv with ZeroMQ without dropping packets?
Most ZeroMQ sockets can both bind (listen on a specific port, acting as a server) and connect (acting as a client). This is usually unrelated to the direction of the data flow. See the guide for more info.
Try to bind your server's PUSH socket and connect from your client's PULL socket, as in the sketch below.
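A minimal sketch of that advice, assuming the pebbe/zmq4 Go binding (the question doesn't name a binding, so this is my choice): the server binds the PUSH socket and the client connects its PULL socket outward, which works through a client-side firewall because the client initiates the TCP connection.

package main

import (
	"log"

	zmq "github.com/pebbe/zmq4"
)

func main() {
	// Server side: bind the PUSH socket.
	push, err := zmq.NewSocket(zmq.PUSH)
	if err != nil {
		log.Fatal(err)
	}
	defer push.Close()
	push.Bind("tcp://*:5557")

	// Client side (normally a separate process behind the firewall):
	// connect the PULL socket; data still flows server -> client.
	pull, err := zmq.NewSocket(zmq.PULL)
	if err != nil {
		log.Fatal(err)
	}
	defer pull.Close()
	pull.Connect("tcp://localhost:5557")

	push.Send("event", 0)
	msg, _ := pull.Recv(0)
	log.Println("client received:", msg)
}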
I am writing a websocket server in Go that broadcasts messages to clients. I use SetWriteDeadline on each send so that the broadcast loop doesn't get stuck.
My question is: how do I interpret an error from SetWriteDeadline? In particular, should I assume that there is something wrong with that particular client and unregister it? Or is it a server-side issue that happened to get triggered on this client?
After researching SetWriteDeadline, I found that the deadline is for putting the message on the TCP stack server-side, not for the client to receive the message. So perhaps a better way to phrase my question is this: is there a separate TCP stack for each websocket client (perhaps this is the thing that has size WriteBufferSize), or is this buffer shared between clients? In the former case it seems like I should unregister the client on a SetWriteDeadline error, but not in the latter case.
Websocket connections are independent of other websocket connections.
Websocket connections have an underlying network connection. These network connections are also independent of each other.
An error returned from SetWriteDeadline indicates a problem with that specific websocket connection or the websocket connection's underlying network connection.
Also note that Gorilla's SetWriteDeadline method never returns an error.
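For illustration, here is a sketch of such a broadcast loop using the gorilla/websocket package (an assumption on my part): the deadline is set before each send, and a failed write unregisters only the one client it failed on, precisely because connections are independent.

package server

import (
	"time"

	"github.com/gorilla/websocket"
)

// broadcast sends msg to every registered client; hypothetical helper.
func broadcast(clients map[*websocket.Conn]bool, msg []byte) {
	for conn := range clients {
		// Bound how long this one client can stall the loop.
		conn.SetWriteDeadline(time.Now().Add(10 * time.Second))
		if err := conn.WriteMessage(websocket.TextMessage, msg); err != nil {
			// The error is specific to this connection: drop this
			// client and keep broadcasting to the others.
			conn.Close()
			delete(clients, conn)
		}
	}
}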
My program is similar to an HTTP proxy: it waits for messages on one interface and forwards them to another interface. The application uses only IOCP, on both the client and server sides. Sometimes the client is slower (by a factor of 10 or 100) than the server, which cannot buffer too much data.
How can I suspend an established TCP connection and then resume it without losing any messages? I tried to delay the post of a new recv IOCP event, but some messages are lost.
C++/Windows 7+
I tried to delay the post of a new recv IOCP event
That should in fact do the trick. Your server-side TCP connection's receive buffer would then fill up to the receive buffer size set on your socket, at which point the socket pushes back on the sending side of the connection, which, by the standard means of TCP flow control, simply stops sending more packets until the receiver signals that it has processed more data.
Now it depends on the sending side how long it wants to wait (timeout) before disconnecting.
"some messages are lost"
That can only happen with TCP if you get disconnected, which both the sender and the receiver will notice, so data isn't silently lost. Whether the sending application can know how much of the data the receiving application (your proxy, in this case) successfully processed depends on the network protocol you run on top of TCP. A minimal sketch of this flow-control behavior follows.
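The sketch below is in Go rather than IOCP (my choice of language for illustration): the receiver accepts a connection but never reads, and once the socket buffers fill, the sender's writes block instead of losing data.

package main

import (
	"log"
	"net"
	"time"
)

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		log.Fatal(err)
	}
	go func() {
		conn, _ := ln.Accept()
		_ = conn
		select {} // never read: the receive buffer fills and TCP pushes back
	}()

	conn, err := net.Dial("tcp", ln.Addr().String())
	if err != nil {
		log.Fatal(err)
	}
	buf := make([]byte, 64*1024)
	for i := 0; ; i++ {
		// A per-write deadline lets us observe the moment writes start
		// to block; without it, Write would simply wait, losing nothing.
		conn.SetWriteDeadline(time.Now().Add(2 * time.Second))
		if _, err := conn.Write(buf); err != nil {
			log.Printf("write %d blocked, flow control engaged: %v", i, err)
			return
		}
	}
}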
I am trying to use the ZeroMQ REQ/REP pattern and cannot figure out how to handle server-side errors. Look at the code from here:
import time
import zmq

port = "5555"
context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://*:%s" % port)
while True:
    # Wait for next request from client
    message = socket.recv()
    print "Received request: ", message
    time.sleep(1)
    socket.send("World from %s" % port)
My problem is: what happens if the client calls socket.send() and then hangs or crashes? Wouldn't the server just get stuck on socket.send() or socket.recv() forever?
Note that this is not a problem with TCP sockets. With TCP sockets I can simply break the connection. With ZMQ, the connections are implicitly managed for me, and I don't know if it is possible to break a 'session' or 'connection' and start over.
You can terminate ZMQ sockets much the same way you terminate TCP sockets.
socket.close()
If you need to wait on a message for only a finite amount of time, you can set a receive timeout on the socket (the RCVTIMEO socket option) so that recv() fails with an error once the timeout expires, and then handle that error the same way you would a TCP socket timeout or disconnect. If you need to manage several sockets, any of which may be in an error state, the Poller class will let you accomplish this.
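Here is a sketch of the receive-timeout approach, written with the pebbe/zmq4 Go binding (an assumption; the question's code is Python, but the socket option is the same):

package main

import (
	"log"
	"time"

	zmq "github.com/pebbe/zmq4"
)

func main() {
	socket, err := zmq.NewSocket(zmq.REP)
	if err != nil {
		log.Fatal(err)
	}
	defer socket.Close()

	// RCVTIMEO: Recv now fails after one second instead of blocking forever.
	socket.SetRcvtimeo(1 * time.Second)
	socket.Bind("tcp://*:5556")

	for {
		msg, err := socket.Recv(0)
		if err != nil {
			// Timed out or failed: handle it like a TCP timeout, e.g.
			// log it and keep waiting for the next request.
			log.Println("recv:", err)
			continue
		}
		log.Println("Received request:", msg)
		socket.Send("World", 0)
	}
}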
The ZMQ Z-guide offers lots of good hints on how to structure your services to handle different scenarios.
I think chapter 4 can be of interest to you, especially the Lazy Pirate Pattern.
Check out the examples of Lazy Pirate Server and Lazy Pirate Client.
In general,
Make sure you setsockopt() on the socket such that send and recv will not block forever. (Temporary blocking – be it client or server – is OK, but infinite blocking is bad because your application cannot do anything else)
In the event that any of the I/O gets an error:
If you are the client, close() the current socket and create a new one to establish a new connection (see the sketch after this list).
If you are the server, there is nothing else to do; you simply wait for a new connection. You will want to explore the Poller class.
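A sketch of the client side of that advice (again assuming the pebbe/zmq4 Go binding): on any error, the socket is closed and a fresh one is created for the next attempt, which is also the core of the Lazy Pirate pattern mentioned above.

package main

import (
	"log"
	"time"

	zmq "github.com/pebbe/zmq4"
)

// request opens a fresh REQ socket, sends one request, and waits for the
// reply with a bounded timeout; hypothetical helper for illustration.
func request(endpoint, msg string) (string, error) {
	socket, err := zmq.NewSocket(zmq.REQ)
	if err != nil {
		return "", err
	}
	defer socket.Close()

	socket.SetRcvtimeo(2 * time.Second) // never block forever on Recv
	socket.SetLinger(0)                 // drop pending messages on Close
	if err := socket.Connect(endpoint); err != nil {
		return "", err
	}
	if _, err := socket.Send(msg, 0); err != nil {
		return "", err
	}
	return socket.Recv(0)
}

func main() {
	for {
		reply, err := request("tcp://localhost:5555", "Hello")
		if err != nil {
			// The server hung or crashed: throw the socket away and
			// retry with a brand-new connection.
			log.Println("no reply, retrying:", err)
			continue
		}
		log.Println("reply:", reply)
		return
	}
}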
If I am not wrong, to have push technology the client (say, a browser) also needs to run a small web server that listens on some port (say, ijetty runs on 8080). Now when the actual server comes to know about any event, it sends the event to the client. This way there is no PULL mechanism involved at all. Is this right? Or is there a persistent connection involved, and the server sends the data on that connection whenever the event happens? My question is: in the former case (if it is true), how does the server know the client's IP?
WebSockets work on top of a TCP connection. Basically, the client makes a connection request to the server with a challenge, the websocket version, its IP, and more data; the server then answers the challenge and returns the result to the client. This process is called the handshake.
If the handshake is approved, the connection is made and the socket connection remains open between the client and the server; heartbeats are sent from the server to the client, like a ping, to check whether the connection is still open.
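As an illustration, here is a sketch of that heartbeat in Go using the gorilla/websocket package (my choice; the answer doesn't name a library). Note also that the server learns the client's address from the connection the client opened, so the client needs no web server of its own:

package main

import (
	"log"
	"net/http"
	"time"

	"github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{}

func handler(w http.ResponseWriter, r *http.Request) {
	// The handshake: upgrade the plain HTTP request to a websocket.
	conn, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		return
	}
	defer conn.Close()

	// The server knows the client's address from the TCP connection the
	// client opened; no listener on the client side is needed.
	log.Println("client connected from", conn.RemoteAddr())

	// Heartbeat: ping the client periodically over the open connection.
	for range time.Tick(30 * time.Second) {
		deadline := time.Now().Add(5 * time.Second)
		if err := conn.WriteControl(websocket.PingMessage, nil, deadline); err != nil {
			return // the client is gone
		}
	}
}

func main() {
	http.HandleFunc("/ws", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}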
Read this wiki article to find out more:
http://en.wikipedia.org/wiki/WebSocket