A TCP server is established on an ESP32 (station mode).
I accept clients with the code below:
int sock = accept(listen_sock, (struct sockaddr *)&source_addr, &addr_len);
if (sock < 0) {
    ESP_LOGE(TAG, "Unable to accept connection: errno %d", errno);
    break;
}
When 20 TCP clients send connection requests at the same time, only 9 can connect. But in my system, 1000 clients in the field will have to connect to the ESP32 server at the same time.
Although I set "Maximum active TCP Connections" and "Maximum listening TCP Connections" to 1000 in menuconfig (LWIP → TCP), the number of connections did not change. Only when I changed "Max number of open sockets" in menuconfig was I able to increase the number of connections.
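For reference, those three menuconfig entries correspond to sdkconfig symbols along these lines (names as in ESP-IDF's lwIP Kconfig; exact names, defaults, and allowed ranges depend on the IDF version, and the values here are only illustrative, not a working 1000-client configuration):

    CONFIG_LWIP_MAX_ACTIVE_TCP=16
    CONFIG_LWIP_MAX_LISTENING_TCP=16
    CONFIG_LWIP_MAX_SOCKETS=16   # "Max number of open sockets" -- the limit that actually gated accept()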
The ESP32 will be connected to the network in station mode and will run the TCP server. The other 1000 ESP32s will connect to it as clients.
Is this possible? If so, how should I configure the TCP server?
I have a Golang TCP server (a net.TCPConn) connected on a port which, in addition to a TCP stream, also has to receive UDP packets and respond with UDP packets. The incoming UDP packet pops out at the server (from a net.TCPConn.Read()), but I can't figure out how to send a UDP packet back again. All of the UDP write methods apply only to net.UDPConn. net.UDPConn.WriteMsgUDP() tantalisingly talks of whether it is applied to a connected or a non-connected socket, but I can't figure out how to derive a net.UDPConn from a net.TCPConn; I've tried casting net.TCPConn to net.UDPConn, but that causes a panic.
What is the correct way to do this?
FYI, I do have a UDP listener open on the same port (because the client at the other end can choose to operate in a completely connectionless mode), but since, when the socket is connected, the UDP packet arrives at the TCP server rather than the UDP server, I'd like to send the UDP response back down the same hole, rather than having to mix the two up in some unholy manner. Or is unholiness the answer?
EDIT: a word on the system design here: the purpose of this UDP packet is to test the connection on this socket (the server simply echoes it back). The socket is a [hopefully] established SSH port-forwarding tunnel, hence I don't want to use another socket as this wouldn't test what I'm trying to test (i.e. that both the socket and the SSH tunnel are open; it is a shortcoming of SSH port-forwarding tunnels that, since the application makes a connection to localhost, the socket will report connected immediately, even if the server isn't actually connected at the time). The SSH tunnel otherwise carries a stream of TCP traffic and I specifically want to use UDP for this as I don't want my UDP connection test to be stuck behind the queue of TCP traffic; timing is important in this application and the UDP packet carries timestamps to measure it. Sending a UDP packet on a connected socket is a valid sockets operation, Go must have a way to do it...?
If you just want to send a UDP packet to a "client" that first reaches your application via TCP, what you could probably do is get the remote address:
tcpAddr := tcpConn.RemoteAddr().(*net.TCPAddr) // tcpConn is your *net.TCPConn
Then, assuming this client is also acting as a server and listening for UDP on port 1234:
ServerAddr, err := net.ResolveUDPAddr("udp", net.JoinHostPort(tcpAddr.IP.String(), "1234"))
Then you could just write back by doing:
conn, err := net.DialUDP("udp", nil, ServerAddr)
if err != nil {
    log.Fatal(err)
}
defer conn.Close() // conn.Close(), not close(conn): close() is for channels

buf := []byte("ping")
_, err = conn.Write(buf)
I don't know if this is exactly what you want, but I hope it gives you some more ideas.
After a discussion on the Golang forums all has become clear.
@JimB is quite right that you cannot send a UDP packet on a TCP port. The reason I thought this was possible is that the definition of sendto() says:
If sendto() is used on a connection-mode (SOCK_STREAM, SOCK_SEQPACKET) socket, the arguments dest_addr and addrlen are ignored (and the error EISCONN may be returned when they are not NULL and 0), and the error ENOTCONN is returned when the socket was not actually connected.
...and when I called sendto() on my TCP connected port the data I sent did indeed turn up at my Golang net.TCPConn end-point. However, what is happening in this scenario is that, under the hood, sendto() has effectively become an alias for send() and, despite calling sendto(), the data sent is actually being transported in a TCP packet and not a UDP packet at all. This was proved by using netcat -u $host $port to send UDP traffic to the server and netcat $host $port to send TCP traffic to the server: the former did not produce any data at the net.TCPConn end-point while the latter did.
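For what it's worth, here is a rough illustration of that aliasing (a sketch rather than the actual client code; Linux-specific, tcpConn stands in for the connected *net.TCPConn, and the destination address is made up):

    rawConn, err := tcpConn.SyscallConn()
    if err != nil {
        log.Fatal(err)
    }
    rawConn.Control(func(fd uintptr) {
        // On a connected SOCK_STREAM socket the destination is ignored, so
        // these bytes leave as ordinary TCP stream data, not a UDP datagram.
        dst := &syscall.SockaddrInet4{Port: 9999, Addr: [4]byte{192, 0, 2, 1}}
        _ = syscall.Sendto(int(fd), []byte("probe"), 0, dst)
    })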
The correct way to do this is for the client to open a TCP and a UDP socket to the server on the same port at the same time while the server, likewise, listens simultaneously for TCP and UDP "connections" (the UDP connection is not a connection at all of course, it is simply an association with a local port number) on the same port. The TCP server then handles my stream and the UDP server handles my UDP test/measurement packets, both down the same SSH port forwarding tunnel.
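For concreteness, a minimal sketch of that arrangement (my illustration, not code from the discussion; port 6000 and the buffer size are placeholders):

    package main

    import (
        "log"
        "net"
    )

    func main() {
        // TCP listener for the data stream.
        tcpLn, err := net.Listen("tcp", ":6000")
        if err != nil {
            log.Fatal(err)
        }

        // UDP socket on the same port number for the test/measurement packets.
        udpConn, err := net.ListenUDP("udp", &net.UDPAddr{Port: 6000})
        if err != nil {
            log.Fatal(err)
        }

        // Echo every UDP datagram straight back to its sender.
        go func() {
            buf := make([]byte, 1500)
            for {
                n, addr, err := udpConn.ReadFromUDP(buf)
                if err != nil {
                    log.Fatal(err)
                }
                udpConn.WriteToUDP(buf[:n], addr)
            }
        }()

        // Handle the TCP stream as usual.
        for {
            conn, err := tcpLn.Accept()
            if err != nil {
                log.Fatal(err)
            }
            go func(c net.Conn) {
                defer c.Close()
                // ... stream handling elided ...
            }(conn)
        }
    }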
Server:
s = TCPServer.open(6000)
loop do
  Thread.start(s.accept) do |client|
    # keep receiving and handling messages from this client
    ...
  end
end
Clients:
server = TCPSocket.open(server_ip, 6000)
... # send a message on each event; the TCP connection stays open
Question:
Sometimes the network goes down or a client crashes. How does the server know whether the TCP connection is still alive? Is there a method or command to verify the connection?
Thanks
The most reliable way to verify the state of a TCP connection is to send a packet to the peer and check whether you get a response or an error. That will give you the current connection state of the socket.
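For illustration, a sketch of such a probe in Go (the question is Ruby, but the idea carries over; the one-byte heartbeat and the expectation that the peer echoes it back are assumptions of this sketch, and note that a first write to a dead peer can still succeed because it only reaches the local send buffer):

    // aliveProbe reports whether conn still responds within the deadline.
    func aliveProbe(conn net.Conn) bool {
        conn.SetDeadline(time.Now().Add(2 * time.Second))
        defer conn.SetDeadline(time.Time{}) // clear the deadline afterwards

        if _, err := conn.Write([]byte{0}); err != nil {
            return false // the write itself failed: connection is gone
        }
        buf := make([]byte, 1)
        _, err := conn.Read(buf) // timeout or EOF here also means "not alive"
        return err == nil
    }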
I've searched the internet but didn't get an answer. Can anyone explain the difference between a local port and a remote port?
A TCP "connection" is a 4-tuple. Local IP, Local Port, Remote IP, and Remote Port. Each end maintains this identification within its TCP stack, with the senses reversed (Local vs. Remote).
The combination of these 4 values must be unique. This explains the problems people often have writing a TCP client that reuses a socket to reconnect to the same server.
A "closed" connection leaves this ID in the tables at each end for some time, in TIME_WAIT state. This is an artifact of a TCP mechansim that deals with maintaining connection integrity even if the physical layer connection breaks, keeps pending packets from being recevied by a second connection, etc. TIME_WAIT can last up to 4 minutes.
Unless the client resets its socket's LocalPort to 0 (which is a request for an automatic ephemeral port assignment) it can fail if it tries to reconnect before TIME_WAIT expires. Since this is 0 for a newly created socket, programmers often overlook this requirement prior to calling Connect.
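The same pitfall sketched in Go rather than the Winsock control API discussed here (the address and port numbers are placeholders):

    // Reconnecting from a fixed local port can collide with the old
    // connection's 4-tuple while it sits in TIME_WAIT.
    d := net.Dialer{LocalAddr: &net.TCPAddr{Port: 50000}} // fixed local port
    conn, err := d.Dial("tcp", "198.51.100.7:6000")
    if err != nil {
        // typically "address already in use" while the previous connection's
        // 4-tuple is still in TIME_WAIT
        log.Fatal(err)
    }
    defer conn.Close()
    // A nil LocalAddr (local port 0) requests a fresh ephemeral port, so the
    // new 4-tuple cannot collide with one in TIME_WAIT.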
LocalPort isn't just an issue for listening sockets.
A server listens on a local port, while a client sends data from its local port.
The client's remote port is the same as the server's local port.
i.e.:
Server listens on port n (local port relative to server)
Client connects to server on port n (remote port relative to client)
To answer your question, the difference is in name, based on perspective.
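A quick way to see that perspective flip (a Go sketch with error handling elided; the address is a placeholder):

    ln, _ := net.Listen("tcp", "127.0.0.1:6000")
    go func() {
        c, _ := ln.Accept()
        // On the server, the local port is the listening port (6000).
        fmt.Println("server sees local:", c.LocalAddr(), "remote:", c.RemoteAddr())
    }()

    c, _ := net.Dial("tcp", "127.0.0.1:6000")
    // On the client, 6000 is the remote port; the local port is ephemeral.
    fmt.Println("client sees local:", c.LocalAddr(), "remote:", c.RemoteAddr())
    time.Sleep(100 * time.Millisecond) // let the server's goroutine print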
This seems to be a good place to start with VB6 socket communication
I'm trying to implement TCP hole punching with Windows sockets using the MinGW toolchain. I think the process is right, but the hole doesn't seem to take. I used this as a reference.
A and B connect to the server S
S sends to A, B's router IP + the port it used to connect to S
S does the same for B
A starts 2 threads:
One thread tries connecting to B's router with the info sent by S
The other thread waits for an incoming connection on the same local port A used when it connected to S
B does the same
I don't think there is an issue in the code, since:
A and B do get each other's IP and port to use
They are both listening on the local port they used when they contacted the server
They are both connecting to the right IP and port, but the connection attempts time out (error code 10060)
Am I missing something?
EDIT: With the help of Process Explorer, I see that one of the clients managed to establish a connection to the peer. But the peer doesn't seem to consider the connection established.
Here is what I captured with Wireshark. For the sake of the example, the server S and the client A are on the same PC. The server S listens on a specific port (8060) redirected to that PC. B still tries to connect to the right IP, because it sees that the public address of A sent by S is localhost and therefore uses the public IP of S instead. (I have replaced the public IPs with placeholders.)
EDIT 2: I think the confusion is due to the fact that both the incoming and outgoing connection requests travel over the same port, which seems to mess up the connection state, because we don't know which socket will get the data arriving on that port. Quoting MSDN:
The SO_REUSEADDR socket option allows a socket to forcibly bind to a port in use by another socket. The second socket calls setsockopt with the optname parameter set to SO_REUSEADDR and the optval parameter set to a boolean value of TRUE before calling bind on the same port as the original socket. Once the second socket has successfully bound, the behavior for all sockets bound to that port is indeterminate.
But talking on the same port is required by the TCP Hole Punching technique to open up the holes !
A starts 2 threads:
One thread tries connecting to B's router with the info sent by S
The other thread waits for an incoming connection on the same local port A used when it connected to S
You can't do this with two threads, since it's just one operation. Every TCP connection that is making an outbound connection is also waiting for an incoming connection. You simply call 'connect', and you are both sending outbound SYNs to make a connection and waiting for inbound SYNs to make a connection.
You may, however, need to close your connection to the server. Your platform likely doesn't permit you to make a TCP connection from a port when you already have an established connection from that same port. So just as you start TCP hole punching, close the connection to the server. Bind a new TCP socket to that same port, and call connect.
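That sequence might look roughly like this (sketched in Go rather than the question's MinGW C for brevity; Unix-style socket options, and serverConn, peerAddr, and the local port are placeholders -- on Windows, SO_REUSEADDR behaves differently, as the MSDN quote above warns):

    serverConn.Close() // drop the connection to S first

    d := net.Dialer{
        // reuse the exact local port that was used for the connection to S
        LocalAddr: &net.TCPAddr{Port: 5555},
        Control: func(network, address string, c syscall.RawConn) error {
            var soErr error
            if err := c.Control(func(fd uintptr) {
                // allow rebinding the port the old connection still occupies
                soErr = syscall.SetsockoptInt(int(fd), syscall.SOL_SOCKET,
                    syscall.SO_REUSEADDR, 1)
            }); err != nil {
                return err
            }
            return soErr
        },
    }
    conn, err := d.Dial("tcp", peerAddr) // outbound SYNs double as the "listen"
    if err != nil {
        log.Fatal(err) // both sides time out if the NATs won't allow the punch
    }
    defer conn.Close()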
A simple way to traverse NAT routers is to make your traffic follow a protocol that your NAT already knows how to forward, such as FTP.
1. Use Wireshark to check that the TCP connection request (the 3-way handshake) is proceeding properly.
2. Ensure your listener thread uses select() to demultiplex the descriptors.
3. Ensure sockPeerConn (the socket used to connect to the other peer) is registered with FD_SET() in the listener thread.
4. Ensure you are checking along these lines:
int ListenerThread(void)
{
    char buf[1024];
    int nConnectedSock = -1;
    fd_set readfds;

    while (1)
    {
        FD_ZERO(&readfds);
        FD_SET(sockPeerConn, &readfds);   /* listening socket for the other peer */
        FD_SET(sockServerConn, &readfds); /* connection to the server S */
        if (nConnectedSock >= 0)
            FD_SET(nConnectedSock, &readfds);

        if (select(FD_SETSIZE, &readfds, NULL, NULL, NULL) < 0)
            break;

        if (FD_ISSET(sockPeerConn, &readfds))
        {
            /* accept the incoming connection from the other peer */
            nConnectedSock = accept(sockPeerConn, NULL, NULL);
        }
        if (FD_ISSET(sockServerConn, &readfds))
        {
            /* receive data from the server */
            recv(sockServerConn, buf, sizeof(buf), 0);
        }
        if (nConnectedSock >= 0 && FD_ISSET(nConnectedSock, &readfds))
        {
            /* receive data from the other peer */
            recv(nConnectedSock, buf, sizeof(buf), 0);
        }
    }
    return 0;
}
5. Ensure you are simultaneously starting the peer connections A to B and B to A.
6. Start your listener thread prior to connecting to the server and the peer, and have a single listener thread for receiving from both the server and the peer.
Not every router supports TCP hole punching. Please check out the following paper, which explains it in detail:
Peer-to-Peer Communication Across Network Address Translators
Why is it that TCP connections to a loopback interface end up in TIME_WAIT (socket closed with SO_DONTLINGER set), but identical connections to a different host do not end up in TIME_WAIT (they are reset/destroyed immediately)?
Here are scenarios to illustrate:
(A) Two applications, a client and a server, are both running on the same Windows machine. The client connects to the server via the server's loopback interface (127.0.0.1, port xxxx), sends data, receives data, and closes the socket (SO_DONTLINGER is set).
Let's say that the connections are very short-lived, so the client app is establishing and destroying a large number of connections each second. The end result is that the sockets end up in TIME_WAIT, and the client eventually exhausts its max number of sockets (on Windows, this is ~3900 by default, and we are assuming that this value will not be changed in the registry).
(B) Same two applications as scenario (A), but the server is on a different host (the client is still running on Windows). The connections are identical in every way, except that they are not destined for 127.0.0.1, but some other IP instead. Here the connections on the client machine do NOT go into TIME_WAIT, and the client app can continue to make connections indefinitely.
Why the discrepancy?
The TIME_WAIT state only occurs at one end of the connection -- the end that closes first. For the loopback interface, both ends are on the same machine, so you will always see TIME_WAIT.
In your other case, try looking at the other machine. I think you'll see the TIME_WAIT sockets there.
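To see this concretely, scenario (A) reduces to a loop like the following Go sketch (it assumes something is listening on 127.0.0.1:6000; afterwards, netstat -an shows the client-side sockets lingering in TIME_WAIT):

    // Many short-lived loopback connections, client closing first each time.
    for i := 0; i < 100; i++ {
        c, err := net.Dial("tcp", "127.0.0.1:6000")
        if err != nil {
            log.Fatal(err)
        }
        c.Write([]byte("x"))
        c.Close() // this end closes first, so this end enters TIME_WAIT
    }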